Test Report: Hyper-V_Windows 18649

7e28b54b3772a78cf87e91422424e940246c9ed2:2024-04-16:34054
Failed tests (30/195)

Order  Failed test  Duration (s)
36 TestAddons/Setup 198.49
41 TestForceSystemdEnv 423.98
47 TestErrorSpam/setup 197.38
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 30.16
78 TestFunctional/parallel/ConfigCmd 1.69
136 TestFunctional/parallel/ServiceCmd/HTTPS 15.02
139 TestFunctional/parallel/ServiceCmd/Format 15.04
141 TestFunctional/parallel/ServiceCmd/URL 15.02
148 TestMultiControlPlane/serial/StartCluster 415.16
149 TestMultiControlPlane/serial/DeployApp 752.54
150 TestMultiControlPlane/serial/PingHostFromPods 41.7
151 TestMultiControlPlane/serial/AddWorkerNode 239.53
153 TestMultiControlPlane/serial/HAppyAfterClusterStart 47.02
154 TestMultiControlPlane/serial/CopyFile 61.93
155 TestMultiControlPlane/serial/StopSecondaryNode 93.72
156 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 40.57
157 TestMultiControlPlane/serial/RestartSecondaryNode 196.62
160 TestImageBuild/serial/Setup 210.44
202 TestMountStart/serial/RestartStopped 176.96
207 TestMultiNode/serial/PingHostFrom2Pods 52.5
208 TestMultiNode/serial/AddNode 231.4
211 TestMultiNode/serial/CopyFile 62.79
213 TestMultiNode/serial/StartAfterStop 259.57
214 TestMultiNode/serial/RestartKeepsNodes 277.38
215 TestMultiNode/serial/DeleteNode 32.02
216 TestMultiNode/serial/StopMultiNode 99.46
217 TestMultiNode/serial/RestartMultiNode 324.26
233 TestNoKubernetes/serial/StartWithK8s 299.98
245 TestPause/serial/SecondStartNoReconfiguration 421.05
285 TestStartStop/group/newest-cni/serial/SecondStart 10800.45
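The failure list above is plain whitespace-separated text (order, test name, duration in seconds), so it can be triaged with standard shell tools. As a sketch, assuming a few rows are saved to a file named `failures.txt` (a hypothetical filename, not part of this report), sorting by the duration column surfaces the slowest failures first:

```shell
# Sample rows copied from the failure table above
# (columns: order, test name, duration in seconds).
cat > failures.txt <<'EOF'
36 TestAddons/Setup 198.49
41 TestForceSystemdEnv 423.98
149 TestMultiControlPlane/serial/DeployApp 752.54
285 TestStartStop/group/newest-cni/serial/SecondStart 10800.45
EOF

# Numeric reverse sort on column 3: slowest failed tests first.
sort -k3,3 -rn failures.txt
```

Durations like the 10800.45s for TestStartStop/group/newest-cni/serial/SecondStart (almost exactly three hours) usually indicate a hung test hitting a timeout rather than an ordinary assertion failure.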
TestAddons/Setup (198.49s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-257600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p addons-257600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: exit status 90 (3m18.3805883s)

-- stdout --
	* [addons-257600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "addons-257600" primary control-plane node in "addons-257600" cluster
	* Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	W0416 16:21:42.373918   11816 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 16:21:42.419810   11816 out.go:291] Setting OutFile to fd 772 ...
	I0416 16:21:42.420361   11816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:21:42.420361   11816 out.go:304] Setting ErrFile to fd 776...
	I0416 16:21:42.420361   11816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:21:42.438993   11816 out.go:298] Setting JSON to false
	I0416 16:21:42.442096   11816 start.go:129] hostinfo: {"hostname":"minikube5","uptime":22132,"bootTime":1713262370,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 16:21:42.442264   11816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 16:21:42.443731   11816 out.go:177] * [addons-257600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 16:21:42.443804   11816 notify.go:220] Checking for updates...
	I0416 16:21:42.444599   11816 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:21:42.445191   11816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:21:42.445833   11816 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 16:21:42.446420   11816 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:21:42.447030   11816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:21:42.447744   11816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:21:47.267241   11816 out.go:177] * Using the hyperv driver based on user configuration
	I0416 16:21:47.267859   11816 start.go:297] selected driver: hyperv
	I0416 16:21:47.267859   11816 start.go:901] validating driver "hyperv" against <nil>
	I0416 16:21:47.267859   11816 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:21:47.307569   11816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:21:47.308984   11816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:21:47.309052   11816 cni.go:84] Creating CNI manager for ""
	I0416 16:21:47.309145   11816 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0416 16:21:47.309145   11816 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0416 16:21:47.309261   11816 start.go:340] cluster config:
	{Name:addons-257600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-257600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:21:47.309435   11816 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:21:47.310761   11816 out.go:177] * Starting "addons-257600" primary control-plane node in "addons-257600" cluster
	I0416 16:21:47.311175   11816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:21:47.311321   11816 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 16:21:47.311393   11816 cache.go:56] Caching tarball of preloaded images
	I0416 16:21:47.311775   11816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:21:47.311926   11816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:21:47.312451   11816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-257600\config.json ...
	I0416 16:21:47.312668   11816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-257600\config.json: {Name:mk9f03a501872fe3505d31019eb0526d11c00bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:21:47.313450   11816 start.go:360] acquireMachinesLock for addons-257600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:21:47.313450   11816 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-257600"
	I0416 16:21:47.313450   11816 start.go:93] Provisioning new machine with config: &{Name:addons-257600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-257600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:21:47.313975   11816 start.go:125] createHost starting for "" (driver="hyperv")
	I0416 16:21:47.314701   11816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0416 16:21:47.314701   11816 start.go:159] libmachine.API.Create for "addons-257600" (driver="hyperv")
	I0416 16:21:47.314701   11816 client.go:168] LocalClient.Create starting
	I0416 16:21:47.315262   11816 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:21:47.545518   11816 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:21:47.682325   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:21:49.544992   11816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:21:49.544992   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:21:49.545759   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:21:51.077425   11816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:21:51.077495   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:21:51.077495   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:21:52.428278   11816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:21:52.428278   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:21:52.428488   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:21:55.848714   11816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:21:55.848783   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:21:55.850031   11816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:21:56.154178   11816 main.go:141] libmachine: Creating SSH key...
	I0416 16:21:56.572381   11816 main.go:141] libmachine: Creating VM...
	I0416 16:21:56.572381   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:21:59.071735   11816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:21:59.071811   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:21:59.071871   11816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:21:59.071871   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:22:00.610477   11816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:22:00.611481   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:00.611666   11816 main.go:141] libmachine: Creating VHD
	I0416 16:22:00.611666   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-257600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:22:04.054006   11816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-257600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5C8D573B-63DA-4D2D-9CF4-D38D4D1DC825
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:22:04.054006   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:04.054006   11816 main.go:141] libmachine: Writing magic tar header
	I0416 16:22:04.054908   11816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:22:04.065472   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-257600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-257600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:22:07.083456   11816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:22:07.083456   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:07.083456   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-257600\disk.vhd' -SizeBytes 20000MB
	I0416 16:22:09.470866   11816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:22:09.470866   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:09.470941   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-257600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-257600' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0416 16:22:12.745337   11816 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-257600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:22:12.745337   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:12.745337   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-257600 -DynamicMemoryEnabled $false
	I0416 16:22:14.761020   11816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:22:14.761020   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:14.761877   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-257600 -Count 2
	I0416 16:22:16.664580   11816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:22:16.664580   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:16.664651   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-257600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-257600\boot2docker.iso'
	I0416 16:22:18.982098   11816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:22:18.982098   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:18.982098   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-257600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-257600\disk.vhd'
	I0416 16:22:21.359598   11816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:22:21.360115   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:21.360159   11816 main.go:141] libmachine: Starting VM...
	I0416 16:22:21.360159   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-257600
	I0416 16:22:23.909010   11816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:22:23.909010   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:23.909010   11816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:22:23.909010   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:22:25.946485   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:22:25.946860   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:25.946939   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:22:28.202408   11816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:22:28.203040   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:29.206945   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:22:31.192764   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:22:31.192764   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:31.193434   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:22:33.413519   11816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:22:33.413519   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:34.426873   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:22:36.357434   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:22:36.358171   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:36.358171   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:22:38.592161   11816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:22:38.592161   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:39.593552   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:22:41.567648   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:22:41.567648   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:41.568378   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:22:43.793025   11816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:22:43.793025   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:44.794153   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:22:46.808897   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:22:46.808897   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:46.809611   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:22:49.120312   11816 main.go:141] libmachine: [stdout =====>] : 172.19.88.77
	
	I0416 16:22:49.120312   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:49.120797   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:22:51.033716   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:22:51.033716   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:51.033716   11816 machine.go:94] provisionDockerMachine start ...
	I0416 16:22:51.034780   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:22:53.017358   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:22:53.017358   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:53.018464   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:22:55.360338   11816 main.go:141] libmachine: [stdout =====>] : 172.19.88.77
	
	I0416 16:22:55.360338   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:55.365094   11816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:22:55.374193   11816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.88.77 22 <nil> <nil>}
	I0416 16:22:55.374264   11816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:22:55.516120   11816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:22:55.516296   11816 buildroot.go:166] provisioning hostname "addons-257600"
	I0416 16:22:55.516498   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:22:57.429214   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:22:57.429830   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:57.429830   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:22:59.689092   11816 main.go:141] libmachine: [stdout =====>] : 172.19.88.77
	
	I0416 16:22:59.689092   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:22:59.694624   11816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:22:59.695068   11816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.88.77 22 <nil> <nil>}
	I0416 16:22:59.695068   11816 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-257600 && echo "addons-257600" | sudo tee /etc/hostname
	I0416 16:22:59.858577   11816 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-257600
	
	I0416 16:22:59.858750   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:23:01.695743   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:23:01.695743   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:01.696416   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:23:03.905300   11816 main.go:141] libmachine: [stdout =====>] : 172.19.88.77
	
	I0416 16:23:03.905300   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:03.911067   11816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:23:03.911789   11816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.88.77 22 <nil> <nil>}
	I0416 16:23:03.911864   11816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-257600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-257600/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-257600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:23:04.066818   11816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:23:04.066818   11816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:23:04.067029   11816 buildroot.go:174] setting up certificates
	I0416 16:23:04.067029   11816 provision.go:84] configureAuth start
	I0416 16:23:04.067064   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:23:05.987882   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:23:05.987882   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:05.988320   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:23:08.271865   11816 main.go:141] libmachine: [stdout =====>] : 172.19.88.77
	
	I0416 16:23:08.271865   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:08.272632   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:23:10.191020   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:23:10.191020   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:10.191020   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:23:12.570804   11816 main.go:141] libmachine: [stdout =====>] : 172.19.88.77
	
	I0416 16:23:12.571024   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:12.571024   11816 provision.go:143] copyHostCerts
	I0416 16:23:12.571787   11816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:23:12.572828   11816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:23:12.573673   11816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:23:12.574319   11816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-257600 san=[127.0.0.1 172.19.88.77 addons-257600 localhost minikube]
	I0416 16:23:12.910315   11816 provision.go:177] copyRemoteCerts
	I0416 16:23:12.920966   11816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:23:12.922112   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:23:14.821483   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:23:14.821483   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:14.822369   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:23:17.079660   11816 main.go:141] libmachine: [stdout =====>] : 172.19.88.77
	
	I0416 16:23:17.079660   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:17.079840   11816 sshutil.go:53] new ssh client: &{IP:172.19.88.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-257600\id_rsa Username:docker}
	I0416 16:23:17.182983   11816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2606302s)
	I0416 16:23:17.184005   11816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:23:17.225537   11816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:23:17.265370   11816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 16:23:17.309132   11816 provision.go:87] duration metric: took 13.2412514s to configureAuth
	I0416 16:23:17.309202   11816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:23:17.309594   11816 config.go:182] Loaded profile config "addons-257600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:23:17.309634   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:23:19.182599   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:23:19.182892   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:19.182981   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:23:21.385528   11816 main.go:141] libmachine: [stdout =====>] : 172.19.88.77
	
	I0416 16:23:21.385528   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:21.390141   11816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:23:21.390545   11816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.88.77 22 <nil> <nil>}
	I0416 16:23:21.390545   11816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:23:21.521889   11816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:23:21.521889   11816 buildroot.go:70] root file system type: tmpfs
	I0416 16:23:21.521889   11816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:23:21.521889   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:23:23.462039   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:23:23.463052   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:23.463236   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:23:25.756476   11816 main.go:141] libmachine: [stdout =====>] : 172.19.88.77
	
	I0416 16:23:25.757509   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:25.763068   11816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:23:25.763612   11816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.88.77 22 <nil> <nil>}
	I0416 16:23:25.763713   11816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:23:25.918327   11816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:23:25.918327   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:23:27.804755   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:23:27.804755   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:27.804755   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:23:30.018920   11816 main.go:141] libmachine: [stdout =====>] : 172.19.88.77
	
	I0416 16:23:30.018920   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:30.022577   11816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:23:30.022880   11816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.88.77 22 <nil> <nil>}
	I0416 16:23:30.022880   11816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:23:31.951756   11816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 16:23:31.951756   11816 machine.go:97] duration metric: took 40.9147545s to provisionDockerMachine
	I0416 16:23:31.951756   11816 client.go:171] duration metric: took 1m44.6311281s to LocalClient.Create
	I0416 16:23:31.951756   11816 start.go:167] duration metric: took 1m44.6311281s to libmachine.API.Create "addons-257600"
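The docker.service update above follows a write-candidate-then-swap pattern: the new unit is written to docker.service.new, and only if it differs from the installed unit is it moved into place and the daemon reloaded. A minimal sketch of that pattern, using temp paths instead of /lib/systemd/system and an echo in place of the systemctl calls:

```shell
# Stand-ins: temp dir instead of /lib/systemd/system; echo instead of
# `systemctl daemon-reload && systemctl enable docker && systemctl restart docker`.
dir="$(mktemp -d)"
unit="$dir/docker.service"
printf '[Unit]\nDescription=demo\n' > "$unit.new"

# diff fails both when the files differ and when $unit does not exist yet
# (the log shows exactly that first-boot case: "can't stat ... docker.service").
if ! diff -u "$unit" "$unit.new" >/dev/null 2>&1; then
  mv "$unit.new" "$unit"
  echo "unit updated"
else
  rm "$unit.new"
  echo "unit unchanged"
fi
```

Because an identical candidate is discarded, repeated provisioning runs avoid needless Docker restarts.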
	I0416 16:23:31.951756   11816 start.go:293] postStartSetup for "addons-257600" (driver="hyperv")
	I0416 16:23:31.951756   11816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:23:31.961087   11816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:23:31.961087   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:23:33.857817   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:23:33.857888   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:33.857888   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:23:36.099673   11816 main.go:141] libmachine: [stdout =====>] : 172.19.88.77
	
	I0416 16:23:36.099673   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:36.100770   11816 sshutil.go:53] new ssh client: &{IP:172.19.88.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-257600\id_rsa Username:docker}
	I0416 16:23:36.211073   11816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2497459s)
	I0416 16:23:36.219590   11816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:23:36.227693   11816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:23:36.227693   11816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:23:36.228403   11816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:23:36.228767   11816 start.go:296] duration metric: took 4.2767688s for postStartSetup
	I0416 16:23:36.232262   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:23:38.146755   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:23:38.146864   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:38.146942   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:23:40.463966   11816 main.go:141] libmachine: [stdout =====>] : 172.19.88.77
	
	I0416 16:23:40.463966   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:40.464517   11816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-257600\config.json ...
	I0416 16:23:40.467067   11816 start.go:128] duration metric: took 1m53.1466832s to createHost
	I0416 16:23:40.467067   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:23:42.345983   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:23:42.345983   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:42.347097   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:23:44.611617   11816 main.go:141] libmachine: [stdout =====>] : 172.19.88.77
	
	I0416 16:23:44.612436   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:44.617962   11816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:23:44.617962   11816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.88.77 22 <nil> <nil>}
	I0416 16:23:44.617962   11816 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 16:23:44.758221   11816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713284624.928867691
	
	I0416 16:23:44.758221   11816 fix.go:216] guest clock: 1713284624.928867691
	I0416 16:23:44.758221   11816 fix.go:229] Guest: 2024-04-16 16:23:44.928867691 +0000 UTC Remote: 2024-04-16 16:23:40.4670676 +0000 UTC m=+118.170225901 (delta=4.461800091s)
	I0416 16:23:44.758221   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:23:46.695902   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:23:46.695902   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:46.695902   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:23:48.991519   11816 main.go:141] libmachine: [stdout =====>] : 172.19.88.77
	
	I0416 16:23:48.991519   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:48.996730   11816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:23:48.996730   11816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.88.77 22 <nil> <nil>}
	I0416 16:23:48.996730   11816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713284624
	I0416 16:23:49.153176   11816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:23:44 UTC 2024
	
	I0416 16:23:49.153176   11816 fix.go:236] clock set: Tue Apr 16 16:23:44 UTC 2024
	 (err=<nil>)
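The two commands above implement the guest clock check: read `date +%s.%N` on the VM, compare against the host clock, and resync with `sudo date -s @<epoch>` when the skew is too large (here, a 4.46s delta). A sketch of that comparison with hard-coded stand-in timestamps and a hypothetical 2-second threshold:

```shell
# Stand-ins: fixed epochs instead of live clocks; threshold of 2s is assumed,
# not taken from minikube's source.
guest_epoch=1713284624   # from the guest: date +%s.%N (fractional part dropped)
local_epoch=1713284620   # host clock when the reply arrived
delta=$((guest_epoch - local_epoch))

# ${delta#-} strips a leading minus sign, giving the absolute skew.
if [ "${delta#-}" -gt 2 ]; then
  echo "resync: sudo date -s @$guest_epoch"   # the real code runs this over SSH
fi
```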
	I0416 16:23:49.153176   11816 start.go:83] releasing machines lock for "addons-257600", held for 2m1.8328254s
	I0416 16:23:49.153176   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:23:51.041769   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:23:51.041769   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:51.041769   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:23:53.396115   11816 main.go:141] libmachine: [stdout =====>] : 172.19.88.77
	
	I0416 16:23:53.396115   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:53.399693   11816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:23:53.399848   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:23:53.407641   11816 ssh_runner.go:195] Run: cat /version.json
	I0416 16:23:53.407699   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-257600 ).state
	I0416 16:23:55.369531   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:23:55.369603   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:55.369603   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:23:55.388582   11816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:23:55.388582   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:55.389203   11816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-257600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:23:57.736568   11816 main.go:141] libmachine: [stdout =====>] : 172.19.88.77
	
	I0416 16:23:57.736568   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:57.737261   11816 sshutil.go:53] new ssh client: &{IP:172.19.88.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-257600\id_rsa Username:docker}
	I0416 16:23:57.768488   11816 main.go:141] libmachine: [stdout =====>] : 172.19.88.77
	
	I0416 16:23:57.768488   11816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:23:57.768564   11816 sshutil.go:53] new ssh client: &{IP:172.19.88.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-257600\id_rsa Username:docker}
	I0416 16:23:57.841016   11816 ssh_runner.go:235] Completed: cat /version.json: (4.4330841s)
	I0416 16:23:57.851830   11816 ssh_runner.go:195] Run: systemctl --version
	I0416 16:23:57.919757   11816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5196799s)
	I0416 16:23:57.930665   11816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:23:57.939070   11816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:23:57.947594   11816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:23:57.974317   11816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:23:57.974317   11816 start.go:494] detecting cgroup driver to use...
	I0416 16:23:57.974536   11816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:23:58.013422   11816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:23:58.039537   11816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:23:58.057738   11816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:23:58.067391   11816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:23:58.092344   11816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:23:58.118841   11816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:23:58.145878   11816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:23:58.173689   11816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:23:58.206805   11816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:23:58.236994   11816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:23:58.269830   11816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
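The run of sed commands above patches /etc/containerd/config.toml in place (sandbox image, cgroup driver, runtime version, CNI conf dir). A sketch of one of those rewrites, the SystemdCgroup toggle, applied to a temp copy rather than the real config:

```shell
# Stand-in: a temp file with one representative TOML line instead of
# /etc/containerd/config.toml.
cfg="$(mktemp)"
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = true' > "$cfg"

# Same rewrite as the log: flip SystemdCgroup while preserving indentation
# via the captured leading-space group.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```

Keeping the captured indentation matters because the value lives inside a nested TOML table whose lines are conventionally indented.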
	I0416 16:23:58.295970   11816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:23:58.321074   11816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:23:58.348425   11816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:23:58.531500   11816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:23:58.563908   11816 start.go:494] detecting cgroup driver to use...
	I0416 16:23:58.576727   11816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:23:58.605714   11816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:23:58.633745   11816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:23:58.679013   11816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:23:58.715450   11816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:23:58.748164   11816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:23:58.793977   11816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:23:58.816396   11816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:23:58.857748   11816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:23:58.873049   11816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:23:58.890434   11816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:23:58.932636   11816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:23:59.112832   11816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:23:59.265734   11816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:23:59.265734   11816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:23:59.304198   11816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:23:59.473605   11816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:25:00.587371   11816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1103066s)
	I0416 16:25:00.595271   11816 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 16:25:00.622818   11816 out.go:177] 
	W0416 16:25:00.623250   11816 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 16:23:30 addons-257600 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:23:30 addons-257600 dockerd[670]: time="2024-04-16T16:23:30.749326486Z" level=info msg="Starting up"
	Apr 16 16:23:30 addons-257600 dockerd[670]: time="2024-04-16T16:23:30.750241271Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 16:23:30 addons-257600 dockerd[670]: time="2024-04-16T16:23:30.751787784Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=676
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.781706434Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.803424027Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.803591260Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.803828508Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.803916926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.804000043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.804090962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.804330410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.804440632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.804460536Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.804472139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.804634371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.805015649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.807826617Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.807929938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.808084569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.808194692Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.808282509Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.808332919Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.808406334Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.817238321Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.817344842Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.817362946Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.817375948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.817388251Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.817477169Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.817951165Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818123600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818216118Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818229521Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818243124Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818253226Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818262428Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818273030Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818284432Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818294534Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818304436Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818314838Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818330041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818340743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818350946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818360447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818369349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818379351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818388253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818399855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818409257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818420860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818429061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818438263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818447965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818464168Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818531482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818543685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818552986Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818583993Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818595195Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818603597Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818611598Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818912959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.818996976Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.819007878Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.819177013Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.819281134Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.819384455Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 16:23:30 addons-257600 dockerd[676]: time="2024-04-16T16:23:30.819399658Z" level=info msg="containerd successfully booted in 0.039112s"
	Apr 16 16:23:31 addons-257600 dockerd[670]: time="2024-04-16T16:23:31.801036438Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 16:23:31 addons-257600 dockerd[670]: time="2024-04-16T16:23:31.818560803Z" level=info msg="Loading containers: start."
	Apr 16 16:23:32 addons-257600 dockerd[670]: time="2024-04-16T16:23:32.041272460Z" level=info msg="Loading containers: done."
	Apr 16 16:23:32 addons-257600 dockerd[670]: time="2024-04-16T16:23:32.059396888Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 16:23:32 addons-257600 dockerd[670]: time="2024-04-16T16:23:32.059524600Z" level=info msg="Daemon has completed initialization"
	Apr 16 16:23:32 addons-257600 dockerd[670]: time="2024-04-16T16:23:32.119555823Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 16:23:32 addons-257600 systemd[1]: Started Docker Application Container Engine.
	Apr 16 16:23:32 addons-257600 dockerd[670]: time="2024-04-16T16:23:32.121832240Z" level=info msg="API listen on [::]:2376"
	Apr 16 16:23:59 addons-257600 dockerd[670]: time="2024-04-16T16:23:59.668538845Z" level=info msg="Processing signal 'terminated'"
	Apr 16 16:23:59 addons-257600 dockerd[670]: time="2024-04-16T16:23:59.670180911Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 16:23:59 addons-257600 dockerd[670]: time="2024-04-16T16:23:59.670691232Z" level=info msg="Daemon shutdown complete"
	Apr 16 16:23:59 addons-257600 dockerd[670]: time="2024-04-16T16:23:59.670883439Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 16:23:59 addons-257600 dockerd[670]: time="2024-04-16T16:23:59.670916041Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 16:23:59 addons-257600 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 16:24:00 addons-257600 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 16:24:00 addons-257600 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 16:24:00 addons-257600 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:24:00 addons-257600 dockerd[1020]: time="2024-04-16T16:24:00.738036609Z" level=info msg="Starting up"
	Apr 16 16:25:00 addons-257600 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 16:25:00 addons-257600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 16:25:00 addons-257600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 16:25:00 addons-257600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	W0416 16:25:00.623912   11816 out.go:239] * 
	W0416 16:25:00.625068   11816 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 16:25:00.625657   11816 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-windows-amd64.exe start -p addons-257600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: exit status 90
--- FAIL: TestAddons/Setup (198.49s)
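The journalctl capture above shows the restarted dockerd (pid 1020) logging "Starting up" at 16:24:00 and giving up with "failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded" at 16:25:00. The roughly 60-second gap suggests a fixed dial deadline on the containerd socket expired before the managed containerd came up; that interpretation is an inference, not something the log states. A quick sanity check of the two timestamps (copied from the log, fractional seconds truncated to microseconds):

```python
from datetime import datetime

# Timestamps taken from the journalctl lines above.
start = datetime.fromisoformat("2024-04-16T16:24:00.738036")  # dockerd[1020] "Starting up"
fail = datetime.fromisoformat("2024-04-16T16:25:00")          # "failed to dial ... context deadline exceeded"

# Elapsed wall-clock time between daemon start and the dial failure.
elapsed = (fail - start).total_seconds()
print(round(elapsed))  # ~59 seconds, consistent with a 60s dial deadline
```

This matches the test's exit status 90 (RUNTIME_ENABLE): `sudo systemctl restart docker` failed because the daemon never finished starting.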

TestForceSystemdEnv (423.98s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-124600 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-env-124600 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: exit status 90 (4m49.0977765s)

-- stdout --
	* [force-systemd-env-124600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperv driver based on user configuration
	* Starting "force-systemd-env-124600" primary control-plane node in "force-systemd-env-124600" cluster
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	
-- /stdout --
** stderr ** 
	W0416 19:03:50.669694    8700 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 19:03:50.720700    8700 out.go:291] Setting OutFile to fd 1832 ...
	I0416 19:03:50.720700    8700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 19:03:50.720700    8700 out.go:304] Setting ErrFile to fd 1112...
	I0416 19:03:50.720700    8700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 19:03:50.741481    8700 out.go:298] Setting JSON to false
	I0416 19:03:50.744472    8700 start.go:129] hostinfo: {"hostname":"minikube5","uptime":31860,"bootTime":1713262370,"procs":207,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 19:03:50.744472    8700 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 19:03:50.745477    8700 out.go:177] * [force-systemd-env-124600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 19:03:50.746483    8700 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 19:03:50.746483    8700 notify.go:220] Checking for updates...
	I0416 19:03:50.747479    8700 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 19:03:50.747479    8700 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 19:03:50.748482    8700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 19:03:50.749505    8700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0416 19:03:50.750481    8700 config.go:182] Loaded profile config "cert-expiration-396200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 19:03:50.750481    8700 config.go:182] Loaded profile config "cert-options-104100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 19:03:50.751480    8700 config.go:182] Loaded profile config "docker-flags-442400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 19:03:50.751480    8700 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 19:03:55.972949    8700 out.go:177] * Using the hyperv driver based on user configuration
	I0416 19:03:55.974149    8700 start.go:297] selected driver: hyperv
	I0416 19:03:55.974213    8700 start.go:901] validating driver "hyperv" against <nil>
	I0416 19:03:55.974213    8700 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 19:03:56.018003    8700 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 19:03:56.019140    8700 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0416 19:03:56.019262    8700 cni.go:84] Creating CNI manager for ""
	I0416 19:03:56.019262    8700 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0416 19:03:56.019336    8700 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0416 19:03:56.019534    8700 start.go:340] cluster config:
	{Name:force-systemd-env-124600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-124600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:
1m0s}
	I0416 19:03:56.019790    8700 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 19:03:56.021034    8700 out.go:177] * Starting "force-systemd-env-124600" primary control-plane node in "force-systemd-env-124600" cluster
	I0416 19:03:56.021734    8700 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 19:03:56.021903    8700 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 19:03:56.021968    8700 cache.go:56] Caching tarball of preloaded images
	I0416 19:03:56.022027    8700 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 19:03:56.022027    8700 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 19:03:56.022578    8700 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\force-systemd-env-124600\config.json ...
	I0416 19:03:56.022874    8700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\force-systemd-env-124600\config.json: {Name:mk256f6acc1258266a3a65c808ca9964c54b0ef3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 19:03:56.023583    8700 start.go:360] acquireMachinesLock for force-systemd-env-124600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 19:05:19.150424    8700 start.go:364] duration metric: took 1m23.122052s to acquireMachinesLock for "force-systemd-env-124600"
	I0416 19:05:19.150678    8700 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-124600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-124600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 19:05:19.150819    8700 start.go:125] createHost starting for "" (driver="hyperv")
	I0416 19:05:19.151993    8700 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0416 19:05:19.152429    8700 start.go:159] libmachine.API.Create for "force-systemd-env-124600" (driver="hyperv")
	I0416 19:05:19.152429    8700 client.go:168] LocalClient.Create starting
	I0416 19:05:19.153050    8700 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 19:05:19.153290    8700 main.go:141] libmachine: Decoding PEM data...
	I0416 19:05:19.153389    8700 main.go:141] libmachine: Parsing certificate...
	I0416 19:05:19.153605    8700 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 19:05:19.153780    8700 main.go:141] libmachine: Decoding PEM data...
	I0416 19:05:19.153858    8700 main.go:141] libmachine: Parsing certificate...
	I0416 19:05:19.153923    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 19:05:20.929217    8700 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 19:05:20.929430    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:05:20.929506    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 19:05:22.598149    8700 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 19:05:22.598641    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:05:22.598705    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 19:05:24.015628    8700 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 19:05:24.016533    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:05:24.016722    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 19:05:27.502134    8700 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 19:05:27.502167    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:05:27.504558    8700 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 19:05:27.823840    8700 main.go:141] libmachine: Creating SSH key...
	I0416 19:05:27.889319    8700 main.go:141] libmachine: Creating VM...
	I0416 19:05:27.889319    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 19:05:30.624014    8700 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 19:05:30.624850    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:05:30.624850    8700 main.go:141] libmachine: Using switch "Default Switch"
	I0416 19:05:30.624850    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 19:05:32.211264    8700 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 19:05:32.211264    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:05:32.211361    8700 main.go:141] libmachine: Creating VHD
	I0416 19:05:32.211440    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\force-systemd-env-124600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 19:05:35.976953    8700 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\force-systemd-env-124600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : EFDB5DC9-0ADE-46B6-84E0-26F3F4300CB1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 19:05:35.977032    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:05:35.977032    8700 main.go:141] libmachine: Writing magic tar header
	I0416 19:05:35.977108    8700 main.go:141] libmachine: Writing SSH key tar header
	I0416 19:05:35.984751    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\force-systemd-env-124600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\force-systemd-env-124600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 19:05:39.070071    8700 main.go:141] libmachine: [stdout =====>] : 
	I0416 19:05:39.070071    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:05:39.070359    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\force-systemd-env-124600\disk.vhd' -SizeBytes 20000MB
	I0416 19:05:41.462715    8700 main.go:141] libmachine: [stdout =====>] : 
	I0416 19:05:41.462715    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:05:41.463387    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM force-systemd-env-124600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\force-systemd-env-124600' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0416 19:05:45.988663    8700 main.go:141] libmachine: [stdout =====>] : 
	Name                     State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                     ----- ----------- ----------------- ------   ------             -------
	force-systemd-env-124600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 19:05:45.989396    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:05:45.989396    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName force-systemd-env-124600 -DynamicMemoryEnabled $false
	I0416 19:05:48.095202    8700 main.go:141] libmachine: [stdout =====>] : 
	I0416 19:05:48.095202    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:05:48.095202    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor force-systemd-env-124600 -Count 2
	I0416 19:05:50.177315    8700 main.go:141] libmachine: [stdout =====>] : 
	I0416 19:05:50.177315    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:05:50.177973    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName force-systemd-env-124600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\force-systemd-env-124600\boot2docker.iso'
	I0416 19:05:52.546270    8700 main.go:141] libmachine: [stdout =====>] : 
	I0416 19:05:52.546803    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:05:52.546803    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName force-systemd-env-124600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\force-systemd-env-124600\disk.vhd'
	I0416 19:05:54.957398    8700 main.go:141] libmachine: [stdout =====>] : 
	I0416 19:05:54.957548    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:05:54.957548    8700 main.go:141] libmachine: Starting VM...
	I0416 19:05:54.957610    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM force-systemd-env-124600
	I0416 19:05:57.603018    8700 main.go:141] libmachine: [stdout =====>] : 
	I0416 19:05:57.603293    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:05:57.607275    8700 main.go:141] libmachine: Waiting for host to start...
	I0416 19:05:57.607275    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:05:59.698276    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:05:59.698276    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:05:59.698276    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:06:02.062619    8700 main.go:141] libmachine: [stdout =====>] : 
	I0416 19:06:02.062619    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:03.075062    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:06:05.402734    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:06:05.402734    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:05.402734    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:06:08.052391    8700 main.go:141] libmachine: [stdout =====>] : 
	I0416 19:06:08.052444    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:09.055463    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:06:11.143114    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:06:11.143114    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:11.143114    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:06:13.536018    8700 main.go:141] libmachine: [stdout =====>] : 
	I0416 19:06:13.536018    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:14.547531    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:06:16.700621    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:06:16.700621    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:16.700621    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:06:19.102903    8700 main.go:141] libmachine: [stdout =====>] : 
	I0416 19:06:19.103129    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:20.112374    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:06:22.267784    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:06:22.267784    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:22.268107    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:06:24.728147    8700 main.go:141] libmachine: [stdout =====>] : 172.19.92.94
	
	I0416 19:06:24.728147    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:24.729169    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:06:26.729364    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:06:26.729976    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:26.729976    8700 machine.go:94] provisionDockerMachine start ...
	I0416 19:06:26.729976    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:06:28.749841    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:06:28.749841    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:28.749841    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:06:31.147487    8700 main.go:141] libmachine: [stdout =====>] : 172.19.92.94
	
	I0416 19:06:31.147487    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:31.151503    8700 main.go:141] libmachine: Using SSH client type: native
	I0416 19:06:31.152003    8700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.92.94 22 <nil> <nil>}
	I0416 19:06:31.152003    8700 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 19:06:31.298567    8700 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 19:06:31.298567    8700 buildroot.go:166] provisioning hostname "force-systemd-env-124600"
	I0416 19:06:31.298567    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:06:33.300455    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:06:33.300455    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:33.300455    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:06:35.632545    8700 main.go:141] libmachine: [stdout =====>] : 172.19.92.94
	
	I0416 19:06:35.632545    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:35.636585    8700 main.go:141] libmachine: Using SSH client type: native
	I0416 19:06:35.637171    8700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.92.94 22 <nil> <nil>}
	I0416 19:06:35.637171    8700 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-124600 && echo "force-systemd-env-124600" | sudo tee /etc/hostname
	I0416 19:06:35.798493    8700 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-124600
	
	I0416 19:06:35.798493    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:06:37.793518    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:06:37.793518    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:37.793606    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:06:40.145879    8700 main.go:141] libmachine: [stdout =====>] : 172.19.92.94
	
	I0416 19:06:40.145879    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:40.152443    8700 main.go:141] libmachine: Using SSH client type: native
	I0416 19:06:40.152984    8700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.92.94 22 <nil> <nil>}
	I0416 19:06:40.152984    8700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-124600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-124600/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-124600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 19:06:40.305032    8700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 19:06:40.305032    8700 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 19:06:40.305032    8700 buildroot.go:174] setting up certificates
	I0416 19:06:40.305032    8700 provision.go:84] configureAuth start
	I0416 19:06:40.305634    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:06:42.314528    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:06:42.314613    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:42.314669    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:06:44.698823    8700 main.go:141] libmachine: [stdout =====>] : 172.19.92.94
	
	I0416 19:06:44.698823    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:44.699900    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:06:46.721811    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:06:46.722458    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:46.722601    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:06:49.073342    8700 main.go:141] libmachine: [stdout =====>] : 172.19.92.94
	
	I0416 19:06:49.073342    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:49.073342    8700 provision.go:143] copyHostCerts
	I0416 19:06:49.073342    8700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 19:06:49.073342    8700 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 19:06:49.073342    8700 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 19:06:49.074348    8700 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 19:06:49.074348    8700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 19:06:49.075344    8700 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 19:06:49.075344    8700 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 19:06:49.075344    8700 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 19:06:49.075344    8700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 19:06:49.076345    8700 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 19:06:49.076345    8700 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 19:06:49.076345    8700 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 19:06:49.076345    8700 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-env-124600 san=[127.0.0.1 172.19.92.94 force-systemd-env-124600 localhost minikube]
	I0416 19:06:49.221517    8700 provision.go:177] copyRemoteCerts
	I0416 19:06:49.233518    8700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 19:06:49.233518    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:06:51.151005    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:06:51.151471    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:51.151536    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:06:53.540797    8700 main.go:141] libmachine: [stdout =====>] : 172.19.92.94
	
	I0416 19:06:53.540797    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:53.540797    8700 sshutil.go:53] new ssh client: &{IP:172.19.92.94 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\force-systemd-env-124600\id_rsa Username:docker}
	I0416 19:06:53.649436    8700 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4156673s)
	I0416 19:06:53.649436    8700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 19:06:53.649436    8700 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 19:06:53.696830    8700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 19:06:53.696830    8700 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0416 19:06:53.743343    8700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 19:06:53.743343    8700 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 19:06:53.789514    8700 provision.go:87] duration metric: took 13.4837162s to configureAuth
	I0416 19:06:53.789514    8700 buildroot.go:189] setting minikube options for container-runtime
	I0416 19:06:53.790134    8700 config.go:182] Loaded profile config "force-systemd-env-124600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 19:06:53.790134    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:06:55.764215    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:06:55.764215    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:55.764319    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:06:58.178899    8700 main.go:141] libmachine: [stdout =====>] : 172.19.92.94
	
	I0416 19:06:58.179656    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:06:58.185424    8700 main.go:141] libmachine: Using SSH client type: native
	I0416 19:06:58.186112    8700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.92.94 22 <nil> <nil>}
	I0416 19:06:58.186112    8700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 19:06:58.340249    8700 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 19:06:58.340249    8700 buildroot.go:70] root file system type: tmpfs
	I0416 19:06:58.340895    8700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 19:06:58.340961    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:07:00.335154    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:07:00.335154    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:00.335154    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:07:02.662338    8700 main.go:141] libmachine: [stdout =====>] : 172.19.92.94
	
	I0416 19:07:02.662411    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:02.665938    8700 main.go:141] libmachine: Using SSH client type: native
	I0416 19:07:02.666343    8700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.92.94 22 <nil> <nil>}
	I0416 19:07:02.666444    8700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 19:07:02.832871    8700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 19:07:02.832871    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:07:04.831018    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:07:04.831018    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:04.831911    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:07:07.279926    8700 main.go:141] libmachine: [stdout =====>] : 172.19.92.94
	
	I0416 19:07:07.279926    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:07.285147    8700 main.go:141] libmachine: Using SSH client type: native
	I0416 19:07:07.285760    8700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.92.94 22 <nil> <nil>}
	I0416 19:07:07.285760    8700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 19:07:09.574618    8700 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 19:07:09.574648    8700 machine.go:97] duration metric: took 42.8422389s to provisionDockerMachine
	I0416 19:07:09.574648    8700 client.go:171] duration metric: took 1m50.4158331s to LocalClient.Create
	I0416 19:07:09.574648    8700 start.go:167] duration metric: took 1m50.4159469s to libmachine.API.Create "force-systemd-env-124600"
	I0416 19:07:09.574648    8700 start.go:293] postStartSetup for "force-systemd-env-124600" (driver="hyperv")
	I0416 19:07:09.574648    8700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 19:07:09.586595    8700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 19:07:09.586595    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:07:11.732384    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:07:11.732571    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:11.732639    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:07:14.153658    8700 main.go:141] libmachine: [stdout =====>] : 172.19.92.94
	
	I0416 19:07:14.153708    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:14.154077    8700 sshutil.go:53] new ssh client: &{IP:172.19.92.94 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\force-systemd-env-124600\id_rsa Username:docker}
	I0416 19:07:14.269069    8700 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6822077s)
	I0416 19:07:14.277404    8700 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 19:07:14.284712    8700 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 19:07:14.284712    8700 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 19:07:14.285248    8700 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 19:07:14.286014    8700 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 19:07:14.286014    8700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 19:07:14.296465    8700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 19:07:14.313852    8700 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 19:07:14.357572    8700 start.go:296] duration metric: took 4.7826524s for postStartSetup
	I0416 19:07:14.360220    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:07:16.497386    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:07:16.497460    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:16.497613    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:07:18.912677    8700 main.go:141] libmachine: [stdout =====>] : 172.19.92.94
	
	I0416 19:07:18.912971    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:18.913177    8700 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\force-systemd-env-124600\config.json ...
	I0416 19:07:18.916047    8700 start.go:128] duration metric: took 1m59.7584254s to createHost
	I0416 19:07:18.916120    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:07:20.854220    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:07:20.854220    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:20.855397    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:07:23.142222    8700 main.go:141] libmachine: [stdout =====>] : 172.19.92.94
	
	I0416 19:07:23.142222    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:23.145833    8700 main.go:141] libmachine: Using SSH client type: native
	I0416 19:07:23.146478    8700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.92.94 22 <nil> <nil>}
	I0416 19:07:23.146478    8700 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 19:07:23.286753    8700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713294443.454342666
	
	I0416 19:07:23.286753    8700 fix.go:216] guest clock: 1713294443.454342666
	I0416 19:07:23.286916    8700 fix.go:229] Guest: 2024-04-16 19:07:23.454342666 +0000 UTC Remote: 2024-04-16 19:07:18.9161206 +0000 UTC m=+208.315899001 (delta=4.538222066s)
	I0416 19:07:23.287015    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:07:25.201774    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:07:25.202792    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:25.202821    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:07:27.498997    8700 main.go:141] libmachine: [stdout =====>] : 172.19.92.94
	
	I0416 19:07:27.498997    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:27.504619    8700 main.go:141] libmachine: Using SSH client type: native
	I0416 19:07:27.505348    8700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.92.94 22 <nil> <nil>}
	I0416 19:07:27.505348    8700 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713294443
	I0416 19:07:27.650534    8700 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 19:07:23 UTC 2024
	
	I0416 19:07:27.650534    8700 fix.go:236] clock set: Tue Apr 16 19:07:23 UTC 2024
	 (err=<nil>)
	I0416 19:07:27.650534    8700 start.go:83] releasing machines lock for "force-systemd-env-124600", held for 2m8.4927543s
	I0416 19:07:27.651356    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:07:29.683748    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:07:29.683748    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:29.683748    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:07:32.053812    8700 main.go:141] libmachine: [stdout =====>] : 172.19.92.94
	
	I0416 19:07:32.054764    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:32.060073    8700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 19:07:32.060433    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:07:32.067532    8700 ssh_runner.go:195] Run: cat /version.json
	I0416 19:07:32.067532    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-124600 ).state
	I0416 19:07:34.137708    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:07:34.137777    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:34.137946    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:07:34.138034    8700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:07:34.138034    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:34.138140    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-124600 ).networkadapters[0]).ipaddresses[0]
	I0416 19:07:36.617892    8700 main.go:141] libmachine: [stdout =====>] : 172.19.92.94
	
	I0416 19:07:36.617892    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:36.618616    8700 sshutil.go:53] new ssh client: &{IP:172.19.92.94 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\force-systemd-env-124600\id_rsa Username:docker}
	I0416 19:07:36.669088    8700 main.go:141] libmachine: [stdout =====>] : 172.19.92.94
	
	I0416 19:07:36.669088    8700 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:07:36.669693    8700 sshutil.go:53] new ssh client: &{IP:172.19.92.94 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\force-systemd-env-124600\id_rsa Username:docker}
	I0416 19:07:36.780572    8700 ssh_runner.go:235] Completed: cat /version.json: (4.7127727s)
	I0416 19:07:36.780572    8700 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7201095s)
	I0416 19:07:36.791498    8700 ssh_runner.go:195] Run: systemctl --version
	I0416 19:07:36.810144    8700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 19:07:36.819453    8700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 19:07:36.828432    8700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 19:07:36.856131    8700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 19:07:36.856131    8700 start.go:494] detecting cgroup driver to use...
	I0416 19:07:36.856131    8700 start.go:498] using "systemd" cgroup driver as enforced via flags
	I0416 19:07:36.856131    8700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 19:07:36.900823    8700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 19:07:36.928686    8700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 19:07:36.946548    8700 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0416 19:07:36.955278    8700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0416 19:07:36.981663    8700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 19:07:37.010699    8700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 19:07:37.038762    8700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 19:07:37.068161    8700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 19:07:37.096809    8700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 19:07:37.123333    8700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 19:07:37.151463    8700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 19:07:37.178604    8700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 19:07:37.208380    8700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 19:07:37.233262    8700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 19:07:37.415850    8700 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 19:07:37.442851    8700 start.go:494] detecting cgroup driver to use...
	I0416 19:07:37.442851    8700 start.go:498] using "systemd" cgroup driver as enforced via flags
	I0416 19:07:37.450856    8700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 19:07:37.490850    8700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 19:07:37.527855    8700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 19:07:37.587151    8700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 19:07:37.622432    8700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 19:07:37.654680    8700 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 19:07:37.707761    8700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 19:07:37.735242    8700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 19:07:37.784003    8700 ssh_runner.go:195] Run: which cri-dockerd
	I0416 19:07:37.800116    8700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 19:07:37.817313    8700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 19:07:37.856798    8700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 19:07:38.040998    8700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 19:07:38.227823    8700 docker.go:574] configuring docker to use "systemd" as cgroup driver...
	I0416 19:07:38.227823    8700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0416 19:07:38.267836    8700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 19:07:38.453999    8700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 19:08:39.601919    8700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1444471s)
	I0416 19:08:39.611709    8700 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 19:08:39.647845    8700 out.go:177] 
	W0416 19:08:39.647845    8700 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 19:07:07 force-systemd-env-124600 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[662]: time="2024-04-16T19:07:08.033267524Z" level=info msg="Starting up"
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[662]: time="2024-04-16T19:07:08.034233538Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[662]: time="2024-04-16T19:07:08.035168616Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=669
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.070993817Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.099264108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.099354612Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.099493072Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.099519402Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.099691901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.099807935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.100020881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.100153333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.100173557Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.100186171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.100284985Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.100706471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.103972536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.104095178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.104265474Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.104383610Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.104545997Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.104689463Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.104781269Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.162279055Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.162336721Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.162358547Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.162375566Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.162486594Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.162612840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.162943921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163164075Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163261187Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163337575Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163366709Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163512277Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163532300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163549219Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163569643Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163583759Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163620602Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163636320Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163659446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163674864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163689080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163703597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163716913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163730829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163743143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163757159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163774979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163791799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163804514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163817829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163831044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163848464Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163870289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163886909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163899523Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163964298Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163982319Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.163995634Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.164007648Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.164171036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.164264644Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.164285368Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.164656596Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.164789950Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.164828895Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 19:07:08 force-systemd-env-124600 dockerd[669]: time="2024-04-16T19:07:08.164848217Z" level=info msg="containerd successfully booted in 0.095426s"
	Apr 16 19:07:09 force-systemd-env-124600 dockerd[662]: time="2024-04-16T19:07:09.298626365Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 19:07:09 force-systemd-env-124600 dockerd[662]: time="2024-04-16T19:07:09.361391370Z" level=info msg="Loading containers: start."
	Apr 16 19:07:09 force-systemd-env-124600 dockerd[662]: time="2024-04-16T19:07:09.625901546Z" level=info msg="Loading containers: done."
	Apr 16 19:07:09 force-systemd-env-124600 dockerd[662]: time="2024-04-16T19:07:09.653771262Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 19:07:09 force-systemd-env-124600 dockerd[662]: time="2024-04-16T19:07:09.654182817Z" level=info msg="Daemon has completed initialization"
	Apr 16 19:07:09 force-systemd-env-124600 dockerd[662]: time="2024-04-16T19:07:09.735413937Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 19:07:09 force-systemd-env-124600 dockerd[662]: time="2024-04-16T19:07:09.735756716Z" level=info msg="API listen on [::]:2376"
	Apr 16 19:07:09 force-systemd-env-124600 systemd[1]: Started Docker Application Container Engine.
	Apr 16 19:07:38 force-systemd-env-124600 dockerd[662]: time="2024-04-16T19:07:38.655770653Z" level=info msg="Processing signal 'terminated'"
	Apr 16 19:07:38 force-systemd-env-124600 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 19:07:38 force-systemd-env-124600 dockerd[662]: time="2024-04-16T19:07:38.658794426Z" level=info msg="Daemon shutdown complete"
	Apr 16 19:07:38 force-systemd-env-124600 dockerd[662]: time="2024-04-16T19:07:38.658925059Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 19:07:38 force-systemd-env-124600 dockerd[662]: time="2024-04-16T19:07:38.658955667Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 19:07:38 force-systemd-env-124600 dockerd[662]: time="2024-04-16T19:07:38.659735567Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 16 19:07:39 force-systemd-env-124600 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 19:07:39 force-systemd-env-124600 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 19:07:39 force-systemd-env-124600 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 19:07:39 force-systemd-env-124600 dockerd[1013]: time="2024-04-16T19:07:39.752452251Z" level=info msg="Starting up"
	Apr 16 19:08:39 force-systemd-env-124600 dockerd[1013]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 19:08:39 force-systemd-env-124600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 19:08:39 force-systemd-env-124600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 19:08:39 force-systemd-env-124600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	W0416 19:08:39.649392    8700 out.go:239] * 
	W0416 19:08:39.650660    8700 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 19:08:39.651707    8700 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-env-124600 --memory=2048 --alsologtostderr -v=5 --driver=hyperv" : exit status 90
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-124600 ssh "docker info --format {{.CgroupDriver}}"
E0416 19:09:10.521188    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-124600 ssh "docker info --format {{.CgroupDriver}}": (1m0.1106023s)
docker_test.go:115: expected systemd cgroup driver, got: 
-- stdout --
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	

-- /stdout --
** stderr ** 
	W0416 19:08:39.990840     720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
panic.go:626: *** TestForceSystemdEnv FAILED at 2024-04-16 19:09:39.9615884 +0000 UTC m=+10129.382263801
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-124600 -n force-systemd-env-124600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-124600 -n force-systemd-env-124600: exit status 6 (11.4587976s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0416 19:09:40.076032    8268 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0416 19:09:51.359345    8268 status.go:417] kubeconfig endpoint: get endpoint: "force-systemd-env-124600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "force-systemd-env-124600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "force-systemd-env-124600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-124600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-124600: (1m3.1484851s)
--- FAIL: TestForceSystemdEnv (423.98s)

TestErrorSpam/setup (197.38s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-199300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 --driver=hyperv
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p nospam-199300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 --driver=hyperv: exit status 90 (3m17.3646729s)

-- stdout --
	* [nospam-199300] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "nospam-199300" primary control-plane node in "nospam-199300" cluster
	* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	W0416 16:26:14.417310    6656 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 16:28:01 nospam-199300 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:28:01 nospam-199300 dockerd[659]: time="2024-04-16T16:28:01.636443442Z" level=info msg="Starting up"
	Apr 16 16:28:01 nospam-199300 dockerd[659]: time="2024-04-16T16:28:01.637307389Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 16:28:01 nospam-199300 dockerd[659]: time="2024-04-16T16:28:01.638394975Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=665
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.671512625Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695062142Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695104949Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695155858Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695169160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695237672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695330288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695581331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695657244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695671946Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695680948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695750660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.696219640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.698994713Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.699106032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.699345373Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.699461893Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.699565811Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.699676529Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.699751342Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.710344349Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.710528081Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.710657003Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.710683707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.710699110Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.710808228Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711142485Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711356122Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711479143Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711499746Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711512749Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711524651Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711536053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711548155Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711567358Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711580160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711591162Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711607765Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711626668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711639070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711650172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711661674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711673176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711686878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711698280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711709382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711756290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711773193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711783695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711794697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711846706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712116352Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712274179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712289981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712301783Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712350592Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712369995Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712382997Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712394199Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712527522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712557927Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712574030Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712802869Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.713005403Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.713076215Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.713096119Z" level=info msg="containerd successfully booted in 0.045085s"
	Apr 16 16:28:02 nospam-199300 dockerd[659]: time="2024-04-16T16:28:02.684271576Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 16:28:02 nospam-199300 dockerd[659]: time="2024-04-16T16:28:02.697510120Z" level=info msg="Loading containers: start."
	Apr 16 16:28:02 nospam-199300 dockerd[659]: time="2024-04-16T16:28:02.906711235Z" level=info msg="Loading containers: done."
	Apr 16 16:28:02 nospam-199300 dockerd[659]: time="2024-04-16T16:28:02.925246985Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 16:28:02 nospam-199300 dockerd[659]: time="2024-04-16T16:28:02.925491525Z" level=info msg="Daemon has completed initialization"
	Apr 16 16:28:02 nospam-199300 dockerd[659]: time="2024-04-16T16:28:02.988788438Z" level=info msg="API listen on [::]:2376"
	Apr 16 16:28:02 nospam-199300 systemd[1]: Started Docker Application Container Engine.
	Apr 16 16:28:02 nospam-199300 dockerd[659]: time="2024-04-16T16:28:02.989755497Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 16:28:30 nospam-199300 dockerd[659]: time="2024-04-16T16:28:30.694614723Z" level=info msg="Processing signal 'terminated'"
	Apr 16 16:28:30 nospam-199300 dockerd[659]: time="2024-04-16T16:28:30.696213580Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 16:28:30 nospam-199300 dockerd[659]: time="2024-04-16T16:28:30.696479789Z" level=info msg="Daemon shutdown complete"
	Apr 16 16:28:30 nospam-199300 dockerd[659]: time="2024-04-16T16:28:30.696556092Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 16:28:30 nospam-199300 dockerd[659]: time="2024-04-16T16:28:30.696577293Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 16:28:30 nospam-199300 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 16:28:31 nospam-199300 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 16:28:31 nospam-199300 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 16:28:31 nospam-199300 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:28:31 nospam-199300 dockerd[1007]: time="2024-04-16T16:28:31.769012417Z" level=info msg="Starting up"
	Apr 16 16:29:31 nospam-199300 dockerd[1007]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 16:29:31 nospam-199300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 16:29:31 nospam-199300 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 16:29:31 nospam-199300 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-windows-amd64.exe start -p nospam-199300 -n=1 --memory=2250 --wait=false --log_dir=C:\\Users\\jenkins.minikube5\\AppData\\Local\\Temp\\nospam-199300 --driver=hyperv" failed: exit status 90
error_spam_test.go:96: unexpected stderr: "W0416 16:26:14.417310    6656 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:96: unexpected stderr: "X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "Job for docker.service failed because the control process exited with error code."
error_spam_test.go:96: unexpected stderr: "See \"systemctl status docker.service\" and \"journalctl -xeu docker.service\" for details."
error_spam_test.go:96: unexpected stderr: "sudo journalctl --no-pager -u docker:"
error_spam_test.go:96: unexpected stderr: "-- stdout --"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 systemd[1]: Starting Docker Application Container Engine..."
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[659]: time=\"2024-04-16T16:28:01.636443442Z\" level=info msg=\"Starting up\""
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[659]: time=\"2024-04-16T16:28:01.637307389Z\" level=info msg=\"containerd not running, starting managed containerd\""
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[659]: time=\"2024-04-16T16:28:01.638394975Z\" level=info msg=\"started new containerd process\" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=665"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.671512625Z\" level=info msg=\"starting containerd\" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.695062142Z\" level=info msg=\"loading plugin \\\"io.containerd.event.v1.exchange\\\"...\" type=io.containerd.event.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.695104949Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.opt\\\"...\" type=io.containerd.internal.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.695155858Z\" level=info msg=\"loading plugin \\\"io.containerd.warning.v1.deprecations\\\"...\" type=io.containerd.warning.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.695169160Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.blockfile\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.695237672Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.blockfile\\\"...\" error=\"no scratch file generator: skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.695330288Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.btrfs\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.695581331Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.btrfs\\\"...\" error=\"path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.695657244Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.devmapper\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.695671946Z\" level=warning msg=\"failed to load plugin io.containerd.snapshotter.v1.devmapper\" error=\"devmapper not configured\""
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.695680948Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.native\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.695750660Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.overlayfs\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.696219640Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.aufs\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.698994713Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.aufs\\\"...\" error=\"aufs is not supported (modprobe aufs failed: exit status 1 \\\"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\\\n\\\"): skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.699106032Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.zfs\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.699345373Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.zfs\\\"...\" error=\"path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.699461893Z\" level=info msg=\"loading plugin \\\"io.containerd.content.v1.content\\\"...\" type=io.containerd.content.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.699565811Z\" level=info msg=\"loading plugin \\\"io.containerd.metadata.v1.bolt\\\"...\" type=io.containerd.metadata.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.699676529Z\" level=warning msg=\"could not use snapshotter devmapper in metadata plugin\" error=\"devmapper not configured\""
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.699751342Z\" level=info msg=\"metadata content store policy set\" policy=shared"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.710344349Z\" level=info msg=\"loading plugin \\\"io.containerd.gc.v1.scheduler\\\"...\" type=io.containerd.gc.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.710528081Z\" level=info msg=\"loading plugin \\\"io.containerd.differ.v1.walking\\\"...\" type=io.containerd.differ.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.710657003Z\" level=info msg=\"loading plugin \\\"io.containerd.lease.v1.manager\\\"...\" type=io.containerd.lease.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.710683707Z\" level=info msg=\"loading plugin \\\"io.containerd.streaming.v1.manager\\\"...\" type=io.containerd.streaming.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.710699110Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v1.linux\\\"...\" type=io.containerd.runtime.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.710808228Z\" level=info msg=\"loading plugin \\\"io.containerd.monitor.v1.cgroups\\\"...\" type=io.containerd.monitor.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711142485Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v2.task\\\"...\" type=io.containerd.runtime.v2"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711356122Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v2.shim\\\"...\" type=io.containerd.runtime.v2"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711479143Z\" level=info msg=\"loading plugin \\\"io.containerd.sandbox.store.v1.local\\\"...\" type=io.containerd.sandbox.store.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711499746Z\" level=info msg=\"loading plugin \\\"io.containerd.sandbox.controller.v1.local\\\"...\" type=io.containerd.sandbox.controller.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711512749Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.containers-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711524651Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.content-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711536053Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.diff-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711548155Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.images-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711567358Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.introspection-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711580160Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.namespaces-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711591162Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.snapshots-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711607765Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.tasks-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711626668Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.containers\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711639070Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.content\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711650172Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.diff\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711661674Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.events\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711673176Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.images\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711686878Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.introspection\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711698280Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.leases\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711709382Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.namespaces\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711756290Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.sandbox-controllers\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711773193Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.sandboxes\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711783695Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.snapshots\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711794697Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.streaming\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.711846706Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.tasks\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.712116352Z\" level=info msg=\"loading plugin \\\"io.containerd.transfer.v1.local\\\"...\" type=io.containerd.transfer.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.712274179Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.transfer\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.712289981Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.version\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.712301783Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.restart\\\"...\" type=io.containerd.internal.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.712350592Z\" level=info msg=\"loading plugin \\\"io.containerd.tracing.processor.v1.otlp\\\"...\" type=io.containerd.tracing.processor.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.712369995Z\" level=info msg=\"skip loading plugin \\\"io.containerd.tracing.processor.v1.otlp\\\"...\" error=\"no OpenTelemetry endpoint: skip plugin\" type=io.containerd.tracing.processor.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.712382997Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.tracing\\\"...\" type=io.containerd.internal.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.712394199Z\" level=info msg=\"skipping tracing processor initialization (no tracing plugin)\" error=\"no OpenTelemetry endpoint: skip plugin\""
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.712527522Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.healthcheck\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.712557927Z\" level=info msg=\"loading plugin \\\"io.containerd.nri.v1.nri\\\"...\" type=io.containerd.nri.v1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.712574030Z\" level=info msg=\"NRI interface is disabled by configuration.\""
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.712802869Z\" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.713005403Z\" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.713076215Z\" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:01 nospam-199300 dockerd[665]: time=\"2024-04-16T16:28:01.713096119Z\" level=info msg=\"containerd successfully booted in 0.045085s\""
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:02 nospam-199300 dockerd[659]: time=\"2024-04-16T16:28:02.684271576Z\" level=info msg=\"[graphdriver] trying configured driver: overlay2\""
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:02 nospam-199300 dockerd[659]: time=\"2024-04-16T16:28:02.697510120Z\" level=info msg=\"Loading containers: start.\""
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:02 nospam-199300 dockerd[659]: time=\"2024-04-16T16:28:02.906711235Z\" level=info msg=\"Loading containers: done.\""
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:02 nospam-199300 dockerd[659]: time=\"2024-04-16T16:28:02.925246985Z\" level=info msg=\"Docker daemon\" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:02 nospam-199300 dockerd[659]: time=\"2024-04-16T16:28:02.925491525Z\" level=info msg=\"Daemon has completed initialization\""
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:02 nospam-199300 dockerd[659]: time=\"2024-04-16T16:28:02.988788438Z\" level=info msg=\"API listen on [::]:2376\""
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:02 nospam-199300 systemd[1]: Started Docker Application Container Engine."
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:02 nospam-199300 dockerd[659]: time=\"2024-04-16T16:28:02.989755497Z\" level=info msg=\"API listen on /var/run/docker.sock\""
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:30 nospam-199300 dockerd[659]: time=\"2024-04-16T16:28:30.694614723Z\" level=info msg=\"Processing signal 'terminated'\""
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:30 nospam-199300 dockerd[659]: time=\"2024-04-16T16:28:30.696213580Z\" level=info msg=\"stopping event stream following graceful shutdown\" error=\"<nil>\" module=libcontainerd namespace=moby"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:30 nospam-199300 dockerd[659]: time=\"2024-04-16T16:28:30.696479789Z\" level=info msg=\"Daemon shutdown complete\""
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:30 nospam-199300 dockerd[659]: time=\"2024-04-16T16:28:30.696556092Z\" level=info msg=\"stopping healthcheck following graceful shutdown\" module=libcontainerd"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:30 nospam-199300 dockerd[659]: time=\"2024-04-16T16:28:30.696577293Z\" level=info msg=\"stopping event stream following graceful shutdown\" error=\"context canceled\" module=libcontainerd namespace=plugins.moby"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:30 nospam-199300 systemd[1]: Stopping Docker Application Container Engine..."
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:31 nospam-199300 systemd[1]: docker.service: Deactivated successfully."
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:31 nospam-199300 systemd[1]: Stopped Docker Application Container Engine."
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:31 nospam-199300 systemd[1]: Starting Docker Application Container Engine..."
error_spam_test.go:96: unexpected stderr: "Apr 16 16:28:31 nospam-199300 dockerd[1007]: time=\"2024-04-16T16:28:31.769012417Z\" level=info msg=\"Starting up\""
error_spam_test.go:96: unexpected stderr: "Apr 16 16:29:31 nospam-199300 dockerd[1007]: failed to start daemon: failed to dial \"/run/containerd/containerd.sock\": failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:29:31 nospam-199300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE"
error_spam_test.go:96: unexpected stderr: "Apr 16 16:29:31 nospam-199300 systemd[1]: docker.service: Failed with result 'exit-code'."
error_spam_test.go:96: unexpected stderr: "Apr 16 16:29:31 nospam-199300 systemd[1]: Failed to start Docker Application Container Engine."
error_spam_test.go:96: unexpected stderr: "-- /stdout --"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-199300] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
- KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
- MINIKUBE_LOCATION=18649
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-199300" primary control-plane node in "nospam-199300" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...


error_spam_test.go:111: minikube stderr:
W0416 16:26:14.417310    6656 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
Apr 16 16:28:01 nospam-199300 systemd[1]: Starting Docker Application Container Engine...
Apr 16 16:28:01 nospam-199300 dockerd[659]: time="2024-04-16T16:28:01.636443442Z" level=info msg="Starting up"
Apr 16 16:28:01 nospam-199300 dockerd[659]: time="2024-04-16T16:28:01.637307389Z" level=info msg="containerd not running, starting managed containerd"
Apr 16 16:28:01 nospam-199300 dockerd[659]: time="2024-04-16T16:28:01.638394975Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=665
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.671512625Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695062142Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695104949Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695155858Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695169160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695237672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695330288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695581331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695657244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695671946Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695680948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.695750660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.696219640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.698994713Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.699106032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.699345373Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.699461893Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.699565811Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.699676529Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.699751342Z" level=info msg="metadata content store policy set" policy=shared
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.710344349Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.710528081Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.710657003Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.710683707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.710699110Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.710808228Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711142485Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711356122Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711479143Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711499746Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711512749Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711524651Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711536053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711548155Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711567358Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711580160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711591162Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711607765Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711626668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711639070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711650172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711661674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711673176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711686878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711698280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711709382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711756290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711773193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711783695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711794697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.711846706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712116352Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712274179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712289981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712301783Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712350592Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712369995Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712382997Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712394199Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712527522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712557927Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712574030Z" level=info msg="NRI interface is disabled by configuration."
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.712802869Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.713005403Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.713076215Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 16 16:28:01 nospam-199300 dockerd[665]: time="2024-04-16T16:28:01.713096119Z" level=info msg="containerd successfully booted in 0.045085s"
Apr 16 16:28:02 nospam-199300 dockerd[659]: time="2024-04-16T16:28:02.684271576Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Apr 16 16:28:02 nospam-199300 dockerd[659]: time="2024-04-16T16:28:02.697510120Z" level=info msg="Loading containers: start."
Apr 16 16:28:02 nospam-199300 dockerd[659]: time="2024-04-16T16:28:02.906711235Z" level=info msg="Loading containers: done."
Apr 16 16:28:02 nospam-199300 dockerd[659]: time="2024-04-16T16:28:02.925246985Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
Apr 16 16:28:02 nospam-199300 dockerd[659]: time="2024-04-16T16:28:02.925491525Z" level=info msg="Daemon has completed initialization"
Apr 16 16:28:02 nospam-199300 dockerd[659]: time="2024-04-16T16:28:02.988788438Z" level=info msg="API listen on [::]:2376"
Apr 16 16:28:02 nospam-199300 systemd[1]: Started Docker Application Container Engine.
Apr 16 16:28:02 nospam-199300 dockerd[659]: time="2024-04-16T16:28:02.989755497Z" level=info msg="API listen on /var/run/docker.sock"
Apr 16 16:28:30 nospam-199300 dockerd[659]: time="2024-04-16T16:28:30.694614723Z" level=info msg="Processing signal 'terminated'"
Apr 16 16:28:30 nospam-199300 dockerd[659]: time="2024-04-16T16:28:30.696213580Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 16 16:28:30 nospam-199300 dockerd[659]: time="2024-04-16T16:28:30.696479789Z" level=info msg="Daemon shutdown complete"
Apr 16 16:28:30 nospam-199300 dockerd[659]: time="2024-04-16T16:28:30.696556092Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Apr 16 16:28:30 nospam-199300 dockerd[659]: time="2024-04-16T16:28:30.696577293Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 16 16:28:30 nospam-199300 systemd[1]: Stopping Docker Application Container Engine...
Apr 16 16:28:31 nospam-199300 systemd[1]: docker.service: Deactivated successfully.
Apr 16 16:28:31 nospam-199300 systemd[1]: Stopped Docker Application Container Engine.
Apr 16 16:28:31 nospam-199300 systemd[1]: Starting Docker Application Container Engine...
Apr 16 16:28:31 nospam-199300 dockerd[1007]: time="2024-04-16T16:28:31.769012417Z" level=info msg="Starting up"
Apr 16 16:29:31 nospam-199300 dockerd[1007]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 16 16:29:31 nospam-199300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 16:29:31 nospam-199300 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 16 16:29:31 nospam-199300 systemd[1]: Failed to start Docker Application Container Engine.

                                                
                                                
-- /stdout --
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (197.38s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (30.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-538700 -n functional-538700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-538700 -n functional-538700: (10.8810044s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 logs -n 25: (7.557286s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| pause   | nospam-199300 --log_dir                                     | nospam-199300     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:30 UTC |                     |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 |                   |                   |                |                     |                     |
	|         | pause                                                       |                   |                   |                |                     |                     |
	| unpause | nospam-199300 --log_dir                                     | nospam-199300     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:30 UTC |                     |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| unpause | nospam-199300 --log_dir                                     | nospam-199300     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:31 UTC |                     |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| unpause | nospam-199300 --log_dir                                     | nospam-199300     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:32 UTC |                     |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| stop    | nospam-199300 --log_dir                                     | nospam-199300     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:33 UTC | 16 Apr 24 16:34 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| stop    | nospam-199300 --log_dir                                     | nospam-199300     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:34 UTC | 16 Apr 24 16:34 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| stop    | nospam-199300 --log_dir                                     | nospam-199300     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:34 UTC | 16 Apr 24 16:35 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| delete  | -p nospam-199300                                            | nospam-199300     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:35 UTC | 16 Apr 24 16:35 UTC |
	| start   | -p functional-538700                                        | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:35 UTC | 16 Apr 24 16:38 UTC |
	|         | --memory=4000                                               |                   |                   |                |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |                |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |                |                     |                     |
	| start   | -p functional-538700                                        | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:38 UTC | 16 Apr 24 16:40 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |                |                     |                     |
	| cache   | functional-538700 cache add                                 | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:40 UTC | 16 Apr 24 16:40 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |                |                     |                     |
	| cache   | functional-538700 cache add                                 | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:40 UTC | 16 Apr 24 16:40 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |                |                     |                     |
	| cache   | functional-538700 cache add                                 | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:40 UTC | 16 Apr 24 16:40 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | functional-538700 cache add                                 | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:40 UTC | 16 Apr 24 16:40 UTC |
	|         | minikube-local-cache-test:functional-538700                 |                   |                   |                |                     |                     |
	| cache   | functional-538700 cache delete                              | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:40 UTC | 16 Apr 24 16:40 UTC |
	|         | minikube-local-cache-test:functional-538700                 |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:40 UTC | 16 Apr 24 16:40 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |                |                     |                     |
	| cache   | list                                                        | minikube          | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:40 UTC | 16 Apr 24 16:40 UTC |
	| ssh     | functional-538700 ssh sudo                                  | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:40 UTC | 16 Apr 24 16:41 UTC |
	|         | crictl images                                               |                   |                   |                |                     |                     |
	| ssh     | functional-538700                                           | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:41 UTC | 16 Apr 24 16:41 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| ssh     | functional-538700 ssh                                       | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:41 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | functional-538700 cache reload                              | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:41 UTC | 16 Apr 24 16:41 UTC |
	| ssh     | functional-538700 ssh                                       | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:41 UTC | 16 Apr 24 16:41 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:41 UTC | 16 Apr 24 16:41 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:41 UTC | 16 Apr 24 16:41 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| kubectl | functional-538700 kubectl --                                | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:41 UTC | 16 Apr 24 16:41 UTC |
	|         | --context functional-538700                                 |                   |                   |                |                     |                     |
	|         | get pods                                                    |                   |                   |                |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:38:25
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:38:25.879471    6320 out.go:291] Setting OutFile to fd 984 ...
	I0416 16:38:25.879966    6320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:38:25.879966    6320 out.go:304] Setting ErrFile to fd 988...
	I0416 16:38:25.879966    6320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:38:25.904949    6320 out.go:298] Setting JSON to false
	I0416 16:38:25.907613    6320 start.go:129] hostinfo: {"hostname":"minikube5","uptime":23135,"bootTime":1713262370,"procs":206,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 16:38:25.908624    6320 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 16:38:25.909836    6320 out.go:177] * [functional-538700] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 16:38:25.910347    6320 notify.go:220] Checking for updates...
	I0416 16:38:25.911004    6320 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:38:25.911538    6320 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:38:25.912389    6320 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 16:38:25.912657    6320 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:38:25.913545    6320 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:38:25.915108    6320 config.go:182] Loaded profile config "functional-538700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:38:25.915384    6320 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:38:30.692020    6320 out.go:177] * Using the hyperv driver based on existing profile
	I0416 16:38:30.692539    6320 start.go:297] selected driver: hyperv
	I0416 16:38:30.692539    6320 start.go:901] validating driver "hyperv" against &{Name:functional-538700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-538700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.95.169 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:38:30.692653    6320 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:38:30.732381    6320 cni.go:84] Creating CNI manager for ""
	I0416 16:38:30.732381    6320 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0416 16:38:30.732381    6320 start.go:340] cluster config:
	{Name:functional-538700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-538700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.95.169 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:38:30.732381    6320 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:38:30.733555    6320 out.go:177] * Starting "functional-538700" primary control-plane node in "functional-538700" cluster
	I0416 16:38:30.735064    6320 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:38:30.735064    6320 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 16:38:30.735267    6320 cache.go:56] Caching tarball of preloaded images
	I0416 16:38:30.735538    6320 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:38:30.735538    6320 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:38:30.735538    6320 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\config.json ...
	I0416 16:38:30.737958    6320 start.go:360] acquireMachinesLock for functional-538700: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:38:30.737958    6320 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-538700"
	I0416 16:38:30.737958    6320 start.go:96] Skipping create...Using existing machine configuration
	I0416 16:38:30.737958    6320 fix.go:54] fixHost starting: 
	I0416 16:38:30.739019    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:38:33.227508    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:38:33.227508    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:38:33.227508    6320 fix.go:112] recreateIfNeeded on functional-538700: state=Running err=<nil>
	W0416 16:38:33.227508    6320 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 16:38:33.228206    6320 out.go:177] * Updating the running hyperv "functional-538700" VM ...
	I0416 16:38:33.228943    6320 machine.go:94] provisionDockerMachine start ...
	I0416 16:38:33.228943    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:38:35.192457    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:38:35.192457    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:38:35.193086    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
	I0416 16:38:37.521956    6320 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
	
	I0416 16:38:37.522430    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:38:37.526290    6320 main.go:141] libmachine: Using SSH client type: native
	I0416 16:38:37.526697    6320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.95.169 22 <nil> <nil>}
	I0416 16:38:37.526802    6320 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:38:37.663140    6320 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-538700
	
	I0416 16:38:37.663140    6320 buildroot.go:166] provisioning hostname "functional-538700"
	I0416 16:38:37.663678    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:38:39.606649    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:38:39.607391    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:38:39.607466    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
	I0416 16:38:41.886227    6320 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
	
	I0416 16:38:41.886227    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:38:41.892235    6320 main.go:141] libmachine: Using SSH client type: native
	I0416 16:38:41.892925    6320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.95.169 22 <nil> <nil>}
	I0416 16:38:41.892925    6320 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-538700 && echo "functional-538700" | sudo tee /etc/hostname
	I0416 16:38:42.051465    6320 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-538700
	
	I0416 16:38:42.051465    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:38:43.966370    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:38:43.966370    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:38:43.966370    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
	I0416 16:38:46.259902    6320 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
	
	I0416 16:38:46.259902    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:38:46.264296    6320 main.go:141] libmachine: Using SSH client type: native
	I0416 16:38:46.264708    6320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.95.169 22 <nil> <nil>}
	I0416 16:38:46.264708    6320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-538700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-538700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-538700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:38:46.406311    6320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:38:46.406311    6320 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:38:46.406311    6320 buildroot.go:174] setting up certificates
	I0416 16:38:46.406311    6320 provision.go:84] configureAuth start
	I0416 16:38:46.406311    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:38:48.329953    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:38:48.330421    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:38:48.330508    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
	I0416 16:38:50.623654    6320 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
	
	I0416 16:38:50.623794    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:38:50.623869    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:38:52.583571    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:38:52.584392    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:38:52.584392    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
	I0416 16:38:54.914166    6320 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
	
	I0416 16:38:54.914166    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:38:54.914166    6320 provision.go:143] copyHostCerts
	I0416 16:38:54.914166    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:38:54.914763    6320 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:38:54.914763    6320 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:38:54.915144    6320 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:38:54.916076    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:38:54.916294    6320 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:38:54.916371    6320 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:38:54.916530    6320 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:38:54.917295    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:38:54.917479    6320 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:38:54.917479    6320 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:38:54.917774    6320 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:38:54.918382    6320 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-538700 san=[127.0.0.1 172.19.95.169 functional-538700 localhost minikube]
	I0416 16:38:55.063490    6320 provision.go:177] copyRemoteCerts
	I0416 16:38:55.074308    6320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:38:55.074308    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:38:57.008599    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:38:57.008599    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:38:57.008599    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
	I0416 16:38:59.332320    6320 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
	
	I0416 16:38:59.332596    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:38:59.332762    6320 sshutil.go:53] new ssh client: &{IP:172.19.95.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-538700\id_rsa Username:docker}
	I0416 16:38:59.447954    6320 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3733979s)
	I0416 16:38:59.448112    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:38:59.448707    6320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:38:59.492406    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:38:59.492935    6320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0416 16:38:59.540346    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:38:59.540799    6320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:38:59.600787    6320 provision.go:87] duration metric: took 13.1937282s to configureAuth
	I0416 16:38:59.600944    6320 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:38:59.601575    6320 config.go:182] Loaded profile config "functional-538700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:38:59.601693    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:39:01.536890    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:39:01.536890    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:01.536890    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
	I0416 16:39:03.828416    6320 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
	
	I0416 16:39:03.828416    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:03.833403    6320 main.go:141] libmachine: Using SSH client type: native
	I0416 16:39:03.834017    6320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.95.169 22 <nil> <nil>}
	I0416 16:39:03.834017    6320 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:39:03.968266    6320 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:39:03.968266    6320 buildroot.go:70] root file system type: tmpfs
	I0416 16:39:03.969042    6320 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:39:03.969180    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:39:05.965307    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:39:05.965307    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:05.965307    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
	I0416 16:39:08.339660    6320 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
	
	I0416 16:39:08.339660    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:08.345264    6320 main.go:141] libmachine: Using SSH client type: native
	I0416 16:39:08.345264    6320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.95.169 22 <nil> <nil>}
	I0416 16:39:08.345957    6320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:39:08.510560    6320 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:39:08.510726    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:39:10.478138    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:39:10.478596    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:10.478596    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
	I0416 16:39:12.828855    6320 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
	
	I0416 16:39:12.828855    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:12.833035    6320 main.go:141] libmachine: Using SSH client type: native
	I0416 16:39:12.833190    6320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.95.169 22 <nil> <nil>}
	I0416 16:39:12.833190    6320 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:39:12.985652    6320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:39:12.985652    6320 machine.go:97] duration metric: took 39.7544542s to provisionDockerMachine
	I0416 16:39:12.985652    6320 start.go:293] postStartSetup for "functional-538700" (driver="hyperv")
	I0416 16:39:12.985652    6320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:39:12.996955    6320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:39:12.996955    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:39:14.898847    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:39:14.898847    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:14.898847    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
	I0416 16:39:17.196088    6320 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
	
	I0416 16:39:17.196239    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:17.196239    6320 sshutil.go:53] new ssh client: &{IP:172.19.95.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-538700\id_rsa Username:docker}
	I0416 16:39:17.316586    6320 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3193857s)
	I0416 16:39:17.330000    6320 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:39:17.337546    6320 command_runner.go:130] > NAME=Buildroot
	I0416 16:39:17.337546    6320 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 16:39:17.337546    6320 command_runner.go:130] > ID=buildroot
	I0416 16:39:17.337546    6320 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 16:39:17.337546    6320 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 16:39:17.337734    6320 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:39:17.337734    6320 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:39:17.338026    6320 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:39:17.338733    6320 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:39:17.338733    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:39:17.339416    6320 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\5324\hosts -> hosts in /etc/test/nested/copy/5324
	I0416 16:39:17.339416    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\5324\hosts -> /etc/test/nested/copy/5324/hosts
	I0416 16:39:17.347989    6320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/5324
	I0416 16:39:17.366235    6320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:39:17.409823    6320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\5324\hosts --> /etc/test/nested/copy/5324/hosts (40 bytes)
	I0416 16:39:17.450678    6320 start.go:296] duration metric: took 4.4647728s for postStartSetup
	I0416 16:39:17.450843    6320 fix.go:56] duration metric: took 46.7102363s for fixHost
	I0416 16:39:17.450843    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:39:19.331509    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:39:19.331509    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:19.331509    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
	I0416 16:39:21.595692    6320 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
	
	I0416 16:39:21.595692    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:21.599824    6320 main.go:141] libmachine: Using SSH client type: native
	I0416 16:39:21.600148    6320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.95.169 22 <nil> <nil>}
	I0416 16:39:21.600148    6320 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:39:21.745163    6320 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713285561.910831470
	
	I0416 16:39:21.745163    6320 fix.go:216] guest clock: 1713285561.910831470
	I0416 16:39:21.745163    6320 fix.go:229] Guest: 2024-04-16 16:39:21.91083147 +0000 UTC Remote: 2024-04-16 16:39:17.4508431 +0000 UTC m=+51.707335701 (delta=4.45998837s)
	I0416 16:39:21.745163    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:39:23.606046    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:39:23.606046    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:23.606204    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
	I0416 16:39:25.907862    6320 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
	
	I0416 16:39:25.908749    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:25.912231    6320 main.go:141] libmachine: Using SSH client type: native
	I0416 16:39:25.912231    6320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.95.169 22 <nil> <nil>}
	I0416 16:39:25.912231    6320 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713285561
	I0416 16:39:26.066782    6320 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:39:21 UTC 2024
	
	I0416 16:39:26.066782    6320 fix.go:236] clock set: Tue Apr 16 16:39:21 UTC 2024
	 (err=<nil>)
	I0416 16:39:26.066782    6320 start.go:83] releasing machines lock for "functional-538700", held for 55.325687s
	I0416 16:39:26.066782    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:39:28.007457    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:39:28.007837    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:28.007837    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
	I0416 16:39:30.292722    6320 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
	
	I0416 16:39:30.292793    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:30.302091    6320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:39:30.302091    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:39:30.303080    6320 ssh_runner.go:195] Run: cat /version.json
	I0416 16:39:30.303080    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:39:32.267914    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:39:32.268589    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:32.268589    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
	I0416 16:39:32.283282    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:39:32.283282    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:32.283282    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
	I0416 16:39:34.665145    6320 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
	
	I0416 16:39:34.666097    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:34.666097    6320 sshutil.go:53] new ssh client: &{IP:172.19.95.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-538700\id_rsa Username:docker}
	I0416 16:39:34.688216    6320 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
	
	I0416 16:39:34.688586    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:39:34.689181    6320 sshutil.go:53] new ssh client: &{IP:172.19.95.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-538700\id_rsa Username:docker}
	I0416 16:39:34.772715    6320 command_runner.go:130] > {"iso_version": "v1.33.0-1713236417-18649", "kicbase_version": "v0.0.43-1713215244-18647", "minikube_version": "v1.33.0-beta.0", "commit": "4ec1a3e88a9f3ffb3930e555284d907468ae83a6"}
	I0416 16:39:34.772715    6320 ssh_runner.go:235] Completed: cat /version.json: (4.4693815s)
	I0416 16:39:34.783447    6320 ssh_runner.go:195] Run: systemctl --version
	I0416 16:39:34.840466    6320 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 16:39:34.840577    6320 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5381182s)
	I0416 16:39:34.840577    6320 command_runner.go:130] > systemd 252 (252)
	I0416 16:39:34.840577    6320 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0416 16:39:34.852570    6320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 16:39:34.860830    6320 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0416 16:39:34.860950    6320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:39:34.873577    6320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:39:34.890113    6320 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0416 16:39:34.890113    6320 start.go:494] detecting cgroup driver to use...
	I0416 16:39:34.890113    6320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:39:34.926554    6320 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0416 16:39:34.936416    6320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:39:34.964731    6320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:39:34.983282    6320 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:39:34.991231    6320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:39:35.018761    6320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:39:35.048002    6320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:39:35.077544    6320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:39:35.106417    6320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:39:35.137874    6320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:39:35.168364    6320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:39:35.200609    6320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:39:35.227809    6320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:39:35.245157    6320 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 16:39:35.252608    6320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:39:35.285069    6320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:39:35.557447    6320 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:39:35.598277    6320 start.go:494] detecting cgroup driver to use...
	I0416 16:39:35.609163    6320 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:39:35.629953    6320 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0416 16:39:35.630005    6320 command_runner.go:130] > [Unit]
	I0416 16:39:35.630059    6320 command_runner.go:130] > Description=Docker Application Container Engine
	I0416 16:39:35.630059    6320 command_runner.go:130] > Documentation=https://docs.docker.com
	I0416 16:39:35.630099    6320 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0416 16:39:35.630099    6320 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0416 16:39:35.630099    6320 command_runner.go:130] > StartLimitBurst=3
	I0416 16:39:35.630099    6320 command_runner.go:130] > StartLimitIntervalSec=60
	I0416 16:39:35.630099    6320 command_runner.go:130] > [Service]
	I0416 16:39:35.630099    6320 command_runner.go:130] > Type=notify
	I0416 16:39:35.630390    6320 command_runner.go:130] > Restart=on-failure
	I0416 16:39:35.630390    6320 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0416 16:39:35.630434    6320 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0416 16:39:35.630465    6320 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0416 16:39:35.630465    6320 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0416 16:39:35.630465    6320 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0416 16:39:35.630465    6320 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0416 16:39:35.630465    6320 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0416 16:39:35.630465    6320 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0416 16:39:35.630465    6320 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0416 16:39:35.630465    6320 command_runner.go:130] > ExecStart=
	I0416 16:39:35.630465    6320 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0416 16:39:35.630465    6320 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0416 16:39:35.630465    6320 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0416 16:39:35.630465    6320 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0416 16:39:35.630465    6320 command_runner.go:130] > LimitNOFILE=infinity
	I0416 16:39:35.630465    6320 command_runner.go:130] > LimitNPROC=infinity
	I0416 16:39:35.630465    6320 command_runner.go:130] > LimitCORE=infinity
	I0416 16:39:35.630465    6320 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0416 16:39:35.630465    6320 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0416 16:39:35.630465    6320 command_runner.go:130] > TasksMax=infinity
	I0416 16:39:35.630465    6320 command_runner.go:130] > TimeoutStartSec=0
	I0416 16:39:35.630465    6320 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0416 16:39:35.630465    6320 command_runner.go:130] > Delegate=yes
	I0416 16:39:35.630465    6320 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0416 16:39:35.630465    6320 command_runner.go:130] > KillMode=process
	I0416 16:39:35.630465    6320 command_runner.go:130] > [Install]
	I0416 16:39:35.630465    6320 command_runner.go:130] > WantedBy=multi-user.target
	I0416 16:39:35.639533    6320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:39:35.668229    6320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:39:35.703506    6320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:39:35.734705    6320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:39:35.753550    6320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:39:35.782954    6320 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0416 16:39:35.795809    6320 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:39:35.800869    6320 command_runner.go:130] > /usr/bin/cri-dockerd
	I0416 16:39:35.810457    6320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:39:35.826592    6320 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:39:35.869156    6320 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:39:36.107148    6320 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:39:36.335055    6320 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:39:36.335435    6320 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:39:36.376452    6320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:39:36.627157    6320 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:39:49.359193    6320 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.7312536s)
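The 130-byte `/etc/docker/daemon.json` copied in just above (its contents are not shown in the log) is what selects the `cgroupfs` cgroup driver noted by `docker.go:574` and later confirmed by `docker info --format {{.CgroupDriver}}`. A hypothetical sketch of such a file, written to a temp path instead of `/etc/docker/daemon.json`; the exact keys minikube emits are an assumption here:

```shell
# Hypothetical daemon.json of the kind minikube writes; the exact payload is
# not shown in the log, so these keys are an assumption.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file"
}
EOF
# dockerd reads exec-opts at daemon start, which is why the log restarts
# docker (systemctl daemon-reload + restart) right after writing the file.
driver=$(grep -o 'native.cgroupdriver=[a-z]*' "$tmp")
echo "$driver"
```

The restart is the expensive step: the log shows `sudo systemctl restart docker` taking about 12.7 s before provisioning continues.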
	I0416 16:39:49.369447    6320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 16:39:49.400300    6320 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0416 16:39:49.439416    6320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:39:49.472608    6320 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 16:39:49.660207    6320 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 16:39:49.845834    6320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:39:50.012564    6320 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 16:39:50.048491    6320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:39:50.081096    6320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:39:50.287712    6320 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 16:39:50.400021    6320 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 16:39:50.408751    6320 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 16:39:50.416961    6320 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0416 16:39:50.417255    6320 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0416 16:39:50.417255    6320 command_runner.go:130] > Device: 0,22	Inode: 1415        Links: 1
	I0416 16:39:50.417255    6320 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0416 16:39:50.417319    6320 command_runner.go:130] > Access: 2024-04-16 16:39:50.477519062 +0000
	I0416 16:39:50.417319    6320 command_runner.go:130] > Modify: 2024-04-16 16:39:50.477519062 +0000
	I0416 16:39:50.417319    6320 command_runner.go:130] > Change: 2024-04-16 16:39:50.481519347 +0000
	I0416 16:39:50.417319    6320 command_runner.go:130] >  Birth: -
	I0416 16:39:50.417386    6320 start.go:562] Will wait 60s for crictl version
	I0416 16:39:50.427185    6320 ssh_runner.go:195] Run: which crictl
	I0416 16:39:50.433877    6320 command_runner.go:130] > /usr/bin/crictl
	I0416 16:39:50.442526    6320 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:39:50.494575    6320 command_runner.go:130] > Version:  0.1.0
	I0416 16:39:50.494575    6320 command_runner.go:130] > RuntimeName:  docker
	I0416 16:39:50.494575    6320 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0416 16:39:50.494575    6320 command_runner.go:130] > RuntimeApiVersion:  v1
	I0416 16:39:50.496055    6320 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 16:39:50.506720    6320 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:39:50.533687    6320 command_runner.go:130] > 26.0.1
	I0416 16:39:50.541664    6320 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:39:50.567671    6320 command_runner.go:130] > 26.0.1
	I0416 16:39:50.568667    6320 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 16:39:50.568667    6320 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 16:39:50.571666    6320 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 16:39:50.571666    6320 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 16:39:50.571666    6320 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 16:39:50.571666    6320 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 16:39:50.573666    6320 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 16:39:50.573666    6320 ip.go:210] interface addr: 172.19.80.1/20
	I0416 16:39:50.581666    6320 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 16:39:50.588213    6320 command_runner.go:130] > 172.19.80.1	host.minikube.internal
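The `grep … /etc/hosts` checks in this log (here for `host.minikube.internal`, later for `control-plane.minikube.internal`) implement an idempotent append: the entry is written only when the grep finds nothing, so re-running a start is safe. A minimal sketch of that pattern against a scratch file rather than the real `/etc/hosts` (the log uses a tab between IP and hostname; a space works equally well):

```shell
# Idempotent /etc/hosts-style append: grep first, write only if absent.
hosts=$(mktemp)
entry="172.19.80.1 host.minikube.internal"
grep -q "host.minikube.internal" "$hosts" || echo "$entry" >> "$hosts"
# Running the same line again is a no-op, so the entry never duplicates:
grep -q "host.minikube.internal" "$hosts" || echo "$entry" >> "$hosts"
count=$(grep -c "host.minikube.internal" "$hosts")
echo "$count"
```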
	I0416 16:39:50.588213    6320 kubeadm.go:877] updating cluster {Name:functional-538700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-538700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.95.169 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:39:50.588213    6320 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:39:50.595978    6320 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:39:50.615924    6320 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0416 16:39:50.615924    6320 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0416 16:39:50.615924    6320 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0416 16:39:50.615924    6320 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0416 16:39:50.615924    6320 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0416 16:39:50.615924    6320 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0416 16:39:50.615924    6320 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0416 16:39:50.615924    6320 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:39:50.616943    6320 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0416 16:39:50.616943    6320 docker.go:615] Images already preloaded, skipping extraction
	I0416 16:39:50.621937    6320 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:39:50.640943    6320 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0416 16:39:50.640943    6320 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0416 16:39:50.640943    6320 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0416 16:39:50.640943    6320 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0416 16:39:50.640943    6320 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0416 16:39:50.640943    6320 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0416 16:39:50.640943    6320 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0416 16:39:50.640943    6320 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:39:50.641926    6320 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0416 16:39:50.641926    6320 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:39:50.641926    6320 kubeadm.go:928] updating node { 172.19.95.169 8441 v1.29.3 docker true true} ...
	I0416 16:39:50.641926    6320 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-538700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.95.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:functional-538700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:39:50.648927    6320 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 16:39:50.684154    6320 command_runner.go:130] > cgroupfs
	I0416 16:39:50.684324    6320 cni.go:84] Creating CNI manager for ""
	I0416 16:39:50.684843    6320 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0416 16:39:50.684843    6320 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:39:50.684899    6320 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.95.169 APIServerPort:8441 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-538700 NodeName:functional-538700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.95.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.95.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:39:50.685126    6320 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.95.169
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-538700"
	  kubeletExtraArgs:
	    node-ip: 172.19.95.169
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.95.169"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 16:39:50.692763    6320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:39:50.709774    6320 command_runner.go:130] > kubeadm
	I0416 16:39:50.709774    6320 command_runner.go:130] > kubectl
	I0416 16:39:50.709774    6320 command_runner.go:130] > kubelet
	I0416 16:39:50.710200    6320 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:39:50.718143    6320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 16:39:50.735282    6320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0416 16:39:50.763883    6320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:39:50.790911    6320 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0416 16:39:50.832783    6320 ssh_runner.go:195] Run: grep 172.19.95.169	control-plane.minikube.internal$ /etc/hosts
	I0416 16:39:50.838500    6320 command_runner.go:130] > 172.19.95.169	control-plane.minikube.internal
	I0416 16:39:50.847433    6320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:39:51.028681    6320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:39:51.055321    6320 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700 for IP: 172.19.95.169
	I0416 16:39:51.055321    6320 certs.go:194] generating shared ca certs ...
	I0416 16:39:51.055416    6320 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:39:51.056111    6320 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 16:39:51.056163    6320 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 16:39:51.056163    6320 certs.go:256] generating profile certs ...
	I0416 16:39:51.057411    6320 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.key
	I0416 16:39:51.057497    6320 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\apiserver.key.97f3b7c3
	I0416 16:39:51.058069    6320 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\proxy-client.key
	I0416 16:39:51.058069    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:39:51.058247    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:39:51.058466    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:39:51.058466    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:39:51.058466    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:39:51.058466    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:39:51.058993    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:39:51.059314    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:39:51.059445    6320 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 16:39:51.060246    6320 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 16:39:51.060340    6320 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 16:39:51.060387    6320 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 16:39:51.060387    6320 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 16:39:51.061103    6320 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 16:39:51.061243    6320 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 16:39:51.061769    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 16:39:51.061909    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:39:51.062094    6320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 16:39:51.063163    6320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:39:51.102477    6320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:39:51.143814    6320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:39:51.182903    6320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:39:51.223260    6320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 16:39:51.263105    6320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 16:39:51.302088    6320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:39:51.340025    6320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 16:39:51.381681    6320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 16:39:51.420257    6320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:39:51.456388    6320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 16:39:51.494354    6320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 16:39:51.538878    6320 ssh_runner.go:195] Run: openssl version
	I0416 16:39:51.547092    6320 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0416 16:39:51.557095    6320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 16:39:51.588705    6320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 16:39:51.596602    6320 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 16:39:51.596602    6320 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 16:39:51.607125    6320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 16:39:51.616411    6320 command_runner.go:130] > 3ec20f2e
	I0416 16:39:51.626643    6320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:39:51.653180    6320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:39:51.682643    6320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:39:51.690137    6320 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:39:51.690137    6320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:39:51.699053    6320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:39:51.707537    6320 command_runner.go:130] > b5213941
	I0416 16:39:51.717070    6320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:39:51.748553    6320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 16:39:51.777420    6320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 16:39:51.784436    6320 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 16:39:51.784436    6320 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 16:39:51.792410    6320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 16:39:51.801057    6320 command_runner.go:130] > 51391683
	I0416 16:39:51.810585    6320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
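Each of the three rounds above computes an OpenSSL subject-name hash (`openssl x509 -hash -noout`) and links the certificate as `/etc/ssl/certs/<hash>.0`, which is the layout OpenSSL uses to locate trust roots (the same thing `c_rehash` produces); the `test -L || ln -fs` guard makes the link creation idempotent. A sketch with a throwaway self-signed certificate, all names made up:

```shell
set -e
tmp=$(mktemp -d)
# Throwaway self-signed cert; the subject name is arbitrary.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" -days 30 2>/dev/null
# 8-hex-digit subject hash, e.g. "b5213941" for minikubeCA.pem in the log.
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
# Create the <hash>.0 symlink only if it does not already exist.
test -L "$tmp/$hash.0" || ln -fs "$tmp/ca.pem" "$tmp/$hash.0"
echo "$hash"
```

The `.0` suffix disambiguates distinct certificates whose subjects hash to the same value (`.1`, `.2`, and so on).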
	I0416 16:39:51.837681    6320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:39:51.844046    6320 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:39:51.844046    6320 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0416 16:39:51.844046    6320 command_runner.go:130] > Device: 8,1	Inode: 1055022     Links: 1
	I0416 16:39:51.844046    6320 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0416 16:39:51.844046    6320 command_runner.go:130] > Access: 2024-04-16 16:37:53.907798233 +0000
	I0416 16:39:51.844046    6320 command_runner.go:130] > Modify: 2024-04-16 16:37:53.907798233 +0000
	I0416 16:39:51.844046    6320 command_runner.go:130] > Change: 2024-04-16 16:37:53.907798233 +0000
	I0416 16:39:51.844046    6320 command_runner.go:130] >  Birth: 2024-04-16 16:37:53.907798233 +0000
	I0416 16:39:51.856330    6320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 16:39:51.863759    6320 command_runner.go:130] > Certificate will not expire
	I0416 16:39:51.872476    6320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 16:39:51.880163    6320 command_runner.go:130] > Certificate will not expire
	I0416 16:39:51.889622    6320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 16:39:51.897879    6320 command_runner.go:130] > Certificate will not expire
	I0416 16:39:51.907514    6320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 16:39:51.915857    6320 command_runner.go:130] > Certificate will not expire
	I0416 16:39:51.925714    6320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 16:39:51.936761    6320 command_runner.go:130] > Certificate will not expire
	I0416 16:39:51.946862    6320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 16:39:51.955237    6320 command_runner.go:130] > Certificate will not expire
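Each `Certificate will not expire` line above is openssl's own stdout: `x509 -checkend N` exits 0 and prints that message when the certificate will still be valid N seconds from now, so `-checkend 86400` asks whether the cert survives the next 24 hours. A self-contained sketch with a freshly generated certificate (names are arbitrary):

```shell
set -e
tmp=$(mktemp -d)
# Fresh self-signed cert, valid for 365 days.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$tmp/demo.key" -out "$tmp/demo.crt" -days 365 2>/dev/null
# -checkend 86400: succeed only if the cert is valid 24h from now.
out=$(openssl x509 -noout -in "$tmp/demo.crt" -checkend 86400)
echo "$out"
```

A cert that would expire within the window prints `Certificate will expire` and exits non-zero instead, which is what would push minikube toward regenerating it.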
	I0416 16:39:51.955237    6320 kubeadm.go:391] StartCluster: {Name:functional-538700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65422424e940246c9ed2 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-538700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.95.169 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:39:51.963276    6320 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 16:39:51.992418    6320 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 16:39:52.010435    6320 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0416 16:39:52.010435    6320 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0416 16:39:52.010435    6320 command_runner.go:130] > /var/lib/minikube/etcd:
	I0416 16:39:52.010435    6320 command_runner.go:130] > member
	W0416 16:39:52.010435    6320 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 16:39:52.010435    6320 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 16:39:52.010435    6320 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 16:39:52.020489    6320 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 16:39:52.038146    6320 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:39:52.039323    6320 kubeconfig.go:125] found "functional-538700" server: "https://172.19.95.169:8441"
	I0416 16:39:52.040713    6320 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:39:52.040713    6320 kapi.go:59] client config for functional-538700: &rest.Config{Host:"https://172.19.95.169:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-538700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-538700\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 16:39:52.042700    6320 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 16:39:52.051700    6320 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 16:39:52.068770    6320 kubeadm.go:624] The running cluster does not require reconfiguration: 172.19.95.169
	I0416 16:39:52.068850    6320 kubeadm.go:1154] stopping kube-system containers ...
	I0416 16:39:52.075140    6320 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 16:39:52.112765    6320 command_runner.go:130] > edb07b9be793
	I0416 16:39:52.112842    6320 command_runner.go:130] > 1738348bc4df
	I0416 16:39:52.112842    6320 command_runner.go:130] > ad194e71f1e9
	I0416 16:39:52.112842    6320 command_runner.go:130] > 56dacc3357ec
	I0416 16:39:52.112842    6320 command_runner.go:130] > 2375d686dc68
	I0416 16:39:52.112842    6320 command_runner.go:130] > 5b7eadad5679
	I0416 16:39:52.112842    6320 command_runner.go:130] > b1d710fcddde
	I0416 16:39:52.112842    6320 command_runner.go:130] > 6751872e7712
	I0416 16:39:52.112842    6320 command_runner.go:130] > cd15ff71bca2
	I0416 16:39:52.112842    6320 command_runner.go:130] > a7b557b7631e
	I0416 16:39:52.112842    6320 command_runner.go:130] > 72869fbe0a50
	I0416 16:39:52.112842    6320 command_runner.go:130] > d93d42b6482f
	I0416 16:39:52.112842    6320 command_runner.go:130] > 3e85edf411ea
	I0416 16:39:52.112842    6320 command_runner.go:130] > 32d5869cfc52
	I0416 16:39:52.112842    6320 docker.go:483] Stopping containers: [edb07b9be793 1738348bc4df ad194e71f1e9 56dacc3357ec 2375d686dc68 5b7eadad5679 b1d710fcddde 6751872e7712 cd15ff71bca2 a7b557b7631e 72869fbe0a50 d93d42b6482f 3e85edf411ea 32d5869cfc52]
	I0416 16:39:52.119407    6320 ssh_runner.go:195] Run: docker stop edb07b9be793 1738348bc4df ad194e71f1e9 56dacc3357ec 2375d686dc68 5b7eadad5679 b1d710fcddde 6751872e7712 cd15ff71bca2 a7b557b7631e 72869fbe0a50 d93d42b6482f 3e85edf411ea 32d5869cfc52
	I0416 16:39:52.141341    6320 command_runner.go:130] > edb07b9be793
	I0416 16:39:52.141341    6320 command_runner.go:130] > 1738348bc4df
	I0416 16:39:52.141341    6320 command_runner.go:130] > ad194e71f1e9
	I0416 16:39:52.141341    6320 command_runner.go:130] > 56dacc3357ec
	I0416 16:39:52.141341    6320 command_runner.go:130] > 2375d686dc68
	I0416 16:39:52.141341    6320 command_runner.go:130] > 5b7eadad5679
	I0416 16:39:52.141341    6320 command_runner.go:130] > b1d710fcddde
	I0416 16:39:52.141341    6320 command_runner.go:130] > 6751872e7712
	I0416 16:39:52.141341    6320 command_runner.go:130] > cd15ff71bca2
	I0416 16:39:52.141341    6320 command_runner.go:130] > a7b557b7631e
	I0416 16:39:52.141341    6320 command_runner.go:130] > 72869fbe0a50
	I0416 16:39:52.141341    6320 command_runner.go:130] > d93d42b6482f
	I0416 16:39:52.141341    6320 command_runner.go:130] > 3e85edf411ea
	I0416 16:39:52.141341    6320 command_runner.go:130] > 32d5869cfc52
	I0416 16:39:52.152270    6320 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 16:39:52.215120    6320 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 16:39:52.233141    6320 command_runner.go:130] > -rw------- 1 root root 5651 Apr 16 16:37 /etc/kubernetes/admin.conf
	I0416 16:39:52.233141    6320 command_runner.go:130] > -rw------- 1 root root 5657 Apr 16 16:37 /etc/kubernetes/controller-manager.conf
	I0416 16:39:52.233141    6320 command_runner.go:130] > -rw------- 1 root root 2007 Apr 16 16:38 /etc/kubernetes/kubelet.conf
	I0416 16:39:52.233141    6320 command_runner.go:130] > -rw------- 1 root root 5605 Apr 16 16:37 /etc/kubernetes/scheduler.conf
	I0416 16:39:52.233141    6320 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 Apr 16 16:37 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Apr 16 16:37 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Apr 16 16:38 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Apr 16 16:37 /etc/kubernetes/scheduler.conf
	
	I0416 16:39:52.244486    6320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0416 16:39:52.260386    6320 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0416 16:39:52.274559    6320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0416 16:39:52.292277    6320 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0416 16:39:52.303222    6320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0416 16:39:52.318369    6320 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:39:52.327526    6320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 16:39:52.352525    6320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0416 16:39:52.368464    6320 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:39:52.377790    6320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 16:39:52.406412    6320 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 16:39:52.423326    6320 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 16:39:52.499651    6320 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 16:39:52.499651    6320 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0416 16:39:52.499651    6320 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0416 16:39:52.499651    6320 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 16:39:52.499651    6320 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0416 16:39:52.499651    6320 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0416 16:39:52.499651    6320 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0416 16:39:52.499651    6320 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0416 16:39:52.499651    6320 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0416 16:39:52.499651    6320 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 16:39:52.499651    6320 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 16:39:52.499651    6320 command_runner.go:130] > [certs] Using the existing "sa" key
	I0416 16:39:52.499651    6320 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 16:39:54.066086    6320 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 16:39:54.066086    6320 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0416 16:39:54.066086    6320 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0416 16:39:54.066086    6320 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0416 16:39:54.066086    6320 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 16:39:54.066086    6320 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 16:39:54.066086    6320 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.5663459s)
	I0416 16:39:54.066086    6320 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 16:39:54.338961    6320 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 16:39:54.339067    6320 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 16:39:54.339067    6320 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0416 16:39:54.339067    6320 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 16:39:54.410455    6320 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 16:39:54.410475    6320 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 16:39:54.410475    6320 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 16:39:54.410475    6320 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 16:39:54.410475    6320 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 16:39:54.503346    6320 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 16:39:54.503346    6320 api_server.go:52] waiting for apiserver process to appear ...
	I0416 16:39:54.517891    6320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:39:55.020622    6320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:39:55.518919    6320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:39:56.024925    6320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:39:56.044144    6320 command_runner.go:130] > 4778
	I0416 16:39:56.044523    6320 api_server.go:72] duration metric: took 1.5410896s to wait for apiserver process to appear ...
	I0416 16:39:56.044587    6320 api_server.go:88] waiting for apiserver healthz status ...
	I0416 16:39:56.044653    6320 api_server.go:253] Checking apiserver healthz at https://172.19.95.169:8441/healthz ...
	I0416 16:39:58.629123    6320 api_server.go:279] https://172.19.95.169:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 16:39:58.629831    6320 api_server.go:103] status: https://172.19.95.169:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 16:39:58.629831    6320 api_server.go:253] Checking apiserver healthz at https://172.19.95.169:8441/healthz ...
	I0416 16:39:58.650122    6320 api_server.go:279] https://172.19.95.169:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 16:39:58.650122    6320 api_server.go:103] status: https://172.19.95.169:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 16:39:59.059970    6320 api_server.go:253] Checking apiserver healthz at https://172.19.95.169:8441/healthz ...
	I0416 16:39:59.067627    6320 api_server.go:279] https://172.19.95.169:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 16:39:59.067627    6320 api_server.go:103] status: https://172.19.95.169:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 16:39:59.559586    6320 api_server.go:253] Checking apiserver healthz at https://172.19.95.169:8441/healthz ...
	I0416 16:39:59.567080    6320 api_server.go:279] https://172.19.95.169:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 16:39:59.567080    6320 api_server.go:103] status: https://172.19.95.169:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 16:40:00.054916    6320 api_server.go:253] Checking apiserver healthz at https://172.19.95.169:8441/healthz ...
	I0416 16:40:00.067589    6320 api_server.go:279] https://172.19.95.169:8441/healthz returned 200:
	ok
	I0416 16:40:00.067927    6320 round_trippers.go:463] GET https://172.19.95.169:8441/version
	I0416 16:40:00.067927    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:00.067927    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:00.067927    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:00.083790    6320 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0416 16:40:00.083790    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:00.083790    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:00.083790    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:00.083790    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:00.084042    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:00.084042    6320 round_trippers.go:580]     Content-Length: 263
	I0416 16:40:00.084042    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:00 GMT
	I0416 16:40:00.084042    6320 round_trippers.go:580]     Audit-Id: 677f8738-59ae-4e08-ba87-3710c6839d14
	I0416 16:40:00.084145    6320 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0416 16:40:00.084145    6320 api_server.go:141] control plane version: v1.29.3
	I0416 16:40:00.084145    6320 api_server.go:131] duration metric: took 4.0393291s to wait for apiserver health ...
	I0416 16:40:00.084145    6320 cni.go:84] Creating CNI manager for ""
	I0416 16:40:00.084145    6320 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0416 16:40:00.085209    6320 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 16:40:00.096753    6320 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 16:40:00.118039    6320 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 16:40:00.157615    6320 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 16:40:00.157999    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods
	I0416 16:40:00.158090    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:00.158090    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:00.158090    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:00.167876    6320 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0416 16:40:00.167876    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:00.167965    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:00 GMT
	I0416 16:40:00.167965    6320 round_trippers.go:580]     Audit-Id: 1cf14e50-73dc-444d-b678-f7318768e6c5
	I0416 16:40:00.167965    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:00.168076    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:00.168076    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:00.168076    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:00.168814    6320 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"497"},"items":[{"metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"495","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51525 chars]
	I0416 16:40:00.173739    6320 system_pods.go:59] 7 kube-system pods found
	I0416 16:40:00.173739    6320 system_pods.go:61] "coredns-76f75df574-s48fs" [0594a7cf-8c07-464b-916b-37290f0328b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 16:40:00.173739    6320 system_pods.go:61] "etcd-functional-538700" [b998d2aa-a709-4f30-ad47-3ec27ce8774d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 16:40:00.173739    6320 system_pods.go:61] "kube-apiserver-functional-538700" [20d2bda4-fd6f-4316-8e79-79522df9a7d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 16:40:00.173739    6320 system_pods.go:61] "kube-controller-manager-functional-538700" [633942b9-3eee-4088-80fb-a6e12193048a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 16:40:00.173739    6320 system_pods.go:61] "kube-proxy-29dsg" [93b5000d-9b1b-4346-9f8d-73e52b42af0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0416 16:40:00.173739    6320 system_pods.go:61] "kube-scheduler-functional-538700" [1c487e8d-bb63-4f07-a10f-bf8c2fbb4974] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 16:40:00.173739    6320 system_pods.go:61] "storage-provisioner" [2526ffa5-f4ff-4859-9389-2b1bde0ea350] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 16:40:00.173739    6320 system_pods.go:74] duration metric: took 15.9202ms to wait for pod list to return data ...
	I0416 16:40:00.173739    6320 node_conditions.go:102] verifying NodePressure condition ...
	I0416 16:40:00.173739    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes
	I0416 16:40:00.173739    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:00.173739    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:00.173739    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:00.177749    6320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:40:00.177749    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:00.177749    6320 round_trippers.go:580]     Audit-Id: d74b068d-6c1a-4bc7-9c6f-f62ba30e3883
	I0416 16:40:00.177749    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:00.177749    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:00.177749    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:00.177749    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:00.177749    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:00 GMT
	I0416 16:40:00.178191    6320 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"497"},"items":[{"metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"m
anagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":" [truncated 4847 chars]
	I0416 16:40:00.178939    6320 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 16:40:00.179097    6320 node_conditions.go:123] node cpu capacity is 2
	I0416 16:40:00.179176    6320 node_conditions.go:105] duration metric: took 5.4215ms to run NodePressure ...
	I0416 16:40:00.179176    6320 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 16:40:00.401577    6320 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0416 16:40:00.536585    6320 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0416 16:40:00.538242    6320 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 16:40:00.538454    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0416 16:40:00.538454    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:00.538454    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:00.538454    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:00.541592    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:00.541592    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:00.541592    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:00 GMT
	I0416 16:40:00.541592    6320 round_trippers.go:580]     Audit-Id: 3f425dd6-eb3a-4297-b237-c2e44443ba9f
	I0416 16:40:00.541592    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:00.541592    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:00.541592    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:00.541592    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:00.542581    6320 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"499"},"items":[{"metadata":{"name":"etcd-functional-538700","namespace":"kube-system","uid":"b998d2aa-a709-4f30-ad47-3ec27ce8774d","resourceVersion":"492","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.95.169:2379","kubernetes.io/config.hash":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.mirror":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.seen":"2024-04-16T16:38:05.018028412Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 30957 chars]
	I0416 16:40:00.544583    6320 kubeadm.go:733] kubelet initialised
	I0416 16:40:00.544583    6320 kubeadm.go:734] duration metric: took 6.2639ms waiting for restarted kubelet to initialise ...
	I0416 16:40:00.545580    6320 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 16:40:00.545580    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods
	I0416 16:40:00.545580    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:00.545580    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:00.545580    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:00.567314    6320 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0416 16:40:00.567400    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:00.567400    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:00 GMT
	I0416 16:40:00.567400    6320 round_trippers.go:580]     Audit-Id: c9574603-fe2d-4ef5-bfe9-a9d26844b39b
	I0416 16:40:00.567400    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:00.567400    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:00.567400    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:00.567400    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:00.568322    6320 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"499"},"items":[{"metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"495","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51525 chars]
	I0416 16:40:00.570624    6320 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-s48fs" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:00.570733    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:00.570801    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:00.570801    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:00.570801    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:00.580597    6320 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0416 16:40:00.580597    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:00.580597    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:00.580597    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:00.580597    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:00.580597    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:00.580597    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:00 GMT
	I0416 16:40:00.580597    6320 round_trippers.go:580]     Audit-Id: fd698755-288c-4bc1-962a-3ad04e9aff8e
	I0416 16:40:00.580597    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"495","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6503 chars]
	I0416 16:40:00.581613    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:00.581613    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:00.581613    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:00.581613    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:00.585602    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:00.586128    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:00.586128    6320 round_trippers.go:580]     Audit-Id: ba4a9644-27be-41fa-8093-9fd8ffb04fbc
	I0416 16:40:00.586128    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:00.586128    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:00.586225    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:00.586225    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:00.586225    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:00 GMT
	I0416 16:40:00.586225    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:01.081197    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:01.081197    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:01.081197    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:01.081197    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:01.085241    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:01.085241    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:01.085241    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:01.085241    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:01.085241    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:01.085241    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:01.085241    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:01 GMT
	I0416 16:40:01.085241    6320 round_trippers.go:580]     Audit-Id: 1d8b4944-b5bf-4036-9477-be2935897438
	I0416 16:40:01.085241    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"495","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6503 chars]
	I0416 16:40:01.086045    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:01.086128    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:01.086128    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:01.086128    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:01.088839    6320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:40:01.088839    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:01.088839    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:01.088839    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:01.088839    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:01.088839    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:01.089299    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:01 GMT
	I0416 16:40:01.089299    6320 round_trippers.go:580]     Audit-Id: b25c3705-7913-4e32-bf65-10776f40e247
	I0416 16:40:01.089934    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:01.584066    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:01.584066    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:01.584066    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:01.584066    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:01.592411    6320 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0416 16:40:01.592411    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:01.592411    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:01 GMT
	I0416 16:40:01.592411    6320 round_trippers.go:580]     Audit-Id: 73acd8be-2f31-4e2b-b236-1bfd0348b778
	I0416 16:40:01.592411    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:01.592411    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:01.592411    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:01.592411    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:01.593428    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"495","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6503 chars]
	I0416 16:40:01.594231    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:01.594261    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:01.594261    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:01.594261    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:01.606913    6320 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0416 16:40:01.606913    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:01.606913    6320 round_trippers.go:580]     Audit-Id: 0bbbf16b-1b90-4f0b-83a2-32fd8ee84faa
	I0416 16:40:01.606913    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:01.606913    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:01.606913    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:01.606913    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:01.606913    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:01 GMT
	I0416 16:40:01.607346    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:02.076612    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:02.076861    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:02.076861    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:02.076861    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:02.081266    6320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:40:02.081266    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:02.082049    6320 round_trippers.go:580]     Audit-Id: 31b85fea-1272-4f4c-b343-7c68f986db26
	I0416 16:40:02.082049    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:02.082049    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:02.082049    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:02.082049    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:02.082049    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:02 GMT
	I0416 16:40:02.082248    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"519","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0416 16:40:02.083163    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:02.083163    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:02.083163    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:02.083163    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:02.087766    6320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:40:02.087766    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:02.087830    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:02.087830    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:02 GMT
	I0416 16:40:02.087830    6320 round_trippers.go:580]     Audit-Id: af4f87dc-4df8-4a69-85c2-ff13e1d2e811
	I0416 16:40:02.087830    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:02.087830    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:02.087830    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:02.088112    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:02.572656    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:02.572656    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:02.572656    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:02.572656    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:02.576363    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:02.577223    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:02.577223    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:02.577223    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:02 GMT
	I0416 16:40:02.577223    6320 round_trippers.go:580]     Audit-Id: 21aa910b-450e-476a-86eb-76c35f095ab4
	I0416 16:40:02.577223    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:02.577223    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:02.577223    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:02.577506    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"519","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0416 16:40:02.577731    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:02.577731    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:02.577731    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:02.577731    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:02.581422    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:02.582148    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:02.582148    6320 round_trippers.go:580]     Audit-Id: c8be206b-4c5e-4a9e-815e-326972b7e585
	I0416 16:40:02.582148    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:02.582148    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:02.582148    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:02.582148    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:02.582148    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:02 GMT
	I0416 16:40:02.582148    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:02.582803    6320 pod_ready.go:102] pod "coredns-76f75df574-s48fs" in "kube-system" namespace has status "Ready":"False"
	I0416 16:40:03.086037    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:03.086262    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:03.086356    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:03.086356    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:03.089710    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:03.089710    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:03.089710    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:03.089710    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:03.089710    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:03 GMT
	I0416 16:40:03.089710    6320 round_trippers.go:580]     Audit-Id: 3313c6d6-7815-4927-a4fd-5cad7fdb51ce
	I0416 16:40:03.089710    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:03.089710    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:03.089710    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"519","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0416 16:40:03.090706    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:03.090706    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:03.090706    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:03.090706    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:03.094771    6320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:40:03.094771    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:03.094771    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:03.094771    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:03 GMT
	I0416 16:40:03.094771    6320 round_trippers.go:580]     Audit-Id: a9fe24c0-a635-455c-8462-eb1cdeb25b4e
	I0416 16:40:03.094771    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:03.094771    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:03.095145    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:03.095145    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:03.572098    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:03.572098    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:03.572098    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:03.572098    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:03.576663    6320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:40:03.576663    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:03.576899    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:03.576899    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:03.576899    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:03.576948    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:03 GMT
	I0416 16:40:03.576948    6320 round_trippers.go:580]     Audit-Id: 1dd42dfe-bf05-43d6-a891-082d06e29066
	I0416 16:40:03.576948    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:03.576948    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"519","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0416 16:40:03.578331    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:03.578420    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:03.578420    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:03.578515    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:03.582070    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:03.582070    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:03.582070    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:03.582159    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:03.582159    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:03.582159    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:03.582159    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:03 GMT
	I0416 16:40:03.582159    6320 round_trippers.go:580]     Audit-Id: 82a2c810-a043-446f-9f03-5220d77af6b1
	I0416 16:40:03.582449    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:04.082882    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:04.082882    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:04.082882    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:04.082882    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:04.087110    6320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:40:04.087110    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:04.087110    6320 round_trippers.go:580]     Audit-Id: 42bc08b4-a392-4ea5-8a28-e392b0b1cbcd
	I0416 16:40:04.087110    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:04.087110    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:04.087110    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:04.087110    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:04.087110    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:04 GMT
	I0416 16:40:04.087110    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"519","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0416 16:40:04.088250    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:04.088250    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:04.088250    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:04.088250    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:04.091503    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:04.092556    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:04.092556    6320 round_trippers.go:580]     Audit-Id: 501f448b-76ea-4f49-a406-7da07b26b30a
	I0416 16:40:04.092556    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:04.092556    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:04.092556    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:04.092556    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:04.092556    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:04 GMT
	I0416 16:40:04.092826    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:04.585861    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:04.585988    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:04.585988    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:04.585988    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:04.589927    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:04.590000    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:04.590000    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:04 GMT
	I0416 16:40:04.590000    6320 round_trippers.go:580]     Audit-Id: c96a7a83-d643-499e-aec0-4096ae5320dd
	I0416 16:40:04.590000    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:04.590000    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:04.590000    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:04.590000    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:04.590000    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"519","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0416 16:40:04.591170    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:04.591170    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:04.591170    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:04.591254    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:04.593563    6320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:40:04.593563    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:04.593563    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:04.593563    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:04.593563    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:04.593563    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:04 GMT
	I0416 16:40:04.593563    6320 round_trippers.go:580]     Audit-Id: f35de026-0533-4a11-b340-bf2dbdfbc0a5
	I0416 16:40:04.593563    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:04.594568    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:04.595012    6320 pod_ready.go:102] pod "coredns-76f75df574-s48fs" in "kube-system" namespace has status "Ready":"False"
	I0416 16:40:05.086260    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:05.086260    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:05.086260    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:05.086260    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:05.091810    6320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:40:05.091810    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:05.091810    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:05 GMT
	I0416 16:40:05.091810    6320 round_trippers.go:580]     Audit-Id: 11762e1e-3bf1-4d56-ba07-31c490ab34cb
	I0416 16:40:05.091810    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:05.091810    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:05.091891    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:05.091891    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:05.091962    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"519","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0416 16:40:05.092728    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:05.092800    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:05.092800    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:05.092800    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:05.099212    6320 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:40:05.099252    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:05.099252    6320 round_trippers.go:580]     Audit-Id: e3811597-9cc6-47b5-b952-379e67a3ce8d
	I0416 16:40:05.099301    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:05.099301    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:05.099301    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:05.099301    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:05.099301    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:05 GMT
	I0416 16:40:05.099301    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:05.571836    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:05.571909    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:05.571978    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:05.571978    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:05.577074    6320 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:40:05.577171    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:05.577171    6320 round_trippers.go:580]     Audit-Id: 16771648-1e8f-4202-a7c0-b57b079ea477
	I0416 16:40:05.577171    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:05.577171    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:05.577171    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:05.577171    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:05.577171    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:05 GMT
	I0416 16:40:05.577570    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"519","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0416 16:40:05.578673    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:05.578747    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:05.578747    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:05.578747    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:05.582617    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:05.582617    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:05.582617    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:05.582617    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:05.582617    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:05.582617    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:05.582617    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:05 GMT
	I0416 16:40:05.582617    6320 round_trippers.go:580]     Audit-Id: 7532bb31-58a0-42d4-9d89-162adf7e9bbc
	I0416 16:40:05.582617    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:06.073327    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:06.073327    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:06.073327    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:06.073327    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:06.078332    6320 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:40:06.078332    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:06.078332    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:06.078332    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:06.078332    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:06.078332    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:06 GMT
	I0416 16:40:06.078332    6320 round_trippers.go:580]     Audit-Id: 48c585a7-5ab3-4a25-b013-705cb3485923
	I0416 16:40:06.078332    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:06.078866    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"519","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0416 16:40:06.080007    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:06.080093    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:06.080093    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:06.080093    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:06.083394    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:06.084057    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:06.084057    6320 round_trippers.go:580]     Audit-Id: d1e74dce-3d54-457b-8b7d-8b042543d2c0
	I0416 16:40:06.084057    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:06.084057    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:06.084057    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:06.084057    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:06.084057    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:06 GMT
	I0416 16:40:06.084498    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:06.574420    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:06.574765    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:06.574824    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:06.574824    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:06.578611    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:06.578844    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:06.578844    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:06 GMT
	I0416 16:40:06.578844    6320 round_trippers.go:580]     Audit-Id: d95cf83e-80bc-40fc-a8c7-2d4d85d97d3f
	I0416 16:40:06.578844    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:06.578844    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:06.578844    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:06.578844    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:06.579086    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"519","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0416 16:40:06.579749    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:06.579749    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:06.579828    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:06.579828    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:06.582916    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:06.582916    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:06.582916    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:06.582916    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:06.582916    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:06.582916    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:06.582916    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:06 GMT
	I0416 16:40:06.583249    6320 round_trippers.go:580]     Audit-Id: 5f1cf086-3284-4b1a-9890-d913ea7ea964
	I0416 16:40:06.583602    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:07.086483    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:07.086775    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:07.086775    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:07.086939    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:07.093221    6320 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:40:07.093221    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:07.093221    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:07.093221    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:07.093221    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:07.093221    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:07.093221    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:07 GMT
	I0416 16:40:07.093221    6320 round_trippers.go:580]     Audit-Id: 4819bd4b-7072-491e-8256-0255c54285cd
	I0416 16:40:07.093896    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"519","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0416 16:40:07.094655    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:07.094655    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:07.094655    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:07.094655    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:07.098247    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:07.098510    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:07.098510    6320 round_trippers.go:580]     Audit-Id: 6b960fcd-d34b-459d-9c02-4ca1f1298eb9
	I0416 16:40:07.098591    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:07.098625    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:07.098625    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:07.098625    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:07.098681    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:07 GMT
	I0416 16:40:07.098999    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:07.099368    6320 pod_ready.go:102] pod "coredns-76f75df574-s48fs" in "kube-system" namespace has status "Ready":"False"
	I0416 16:40:07.585317    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:07.585438    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:07.585438    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:07.585438    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:07.590765    6320 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:40:07.590765    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:07.590765    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:07.590765    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:07.590765    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:07.590765    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:07.590765    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:07 GMT
	I0416 16:40:07.590765    6320 round_trippers.go:580]     Audit-Id: caceed9a-61da-4375-88a6-8ae4d7181b98
	I0416 16:40:07.591063    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"519","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0416 16:40:07.591799    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:07.591799    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:07.591799    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:07.591799    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:07.597379    6320 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:40:07.597379    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:07.597379    6320 round_trippers.go:580]     Audit-Id: 9d3e3536-60a5-433f-a813-75f872522e7c
	I0416 16:40:07.597379    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:07.597379    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:07.597379    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:07.597379    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:07.597379    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:07 GMT
	I0416 16:40:07.599993    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:08.083628    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:08.083628    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:08.083628    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:08.083628    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:08.087683    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:08.087683    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:08.087683    6320 round_trippers.go:580]     Audit-Id: 336cdd0e-8447-4c09-b96c-89ef62eb079e
	I0416 16:40:08.087683    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:08.087683    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:08.087683    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:08.087683    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:08.087683    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:08 GMT
	I0416 16:40:08.087964    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"519","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0416 16:40:08.088711    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:08.088711    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:08.088711    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:08.088711    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:08.094082    6320 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:40:08.094082    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:08.094082    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:08.094082    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:08 GMT
	I0416 16:40:08.094082    6320 round_trippers.go:580]     Audit-Id: a0b8f585-459d-497b-94a7-1f5252ca1b84
	I0416 16:40:08.094082    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:08.094082    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:08.094082    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:08.094679    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:08.582869    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:08.582949    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:08.582949    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:08.582949    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:08.586498    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:08.586498    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:08.586498    6320 round_trippers.go:580]     Audit-Id: dd6c2c57-3340-4ca9-8973-ebb62f26842e
	I0416 16:40:08.586498    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:08.586498    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:08.586498    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:08.586498    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:08.586498    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:08 GMT
	I0416 16:40:08.586498    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"519","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0416 16:40:08.587598    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:08.587665    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:08.587665    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:08.587665    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:08.592899    6320 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:40:08.592899    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:08.592899    6320 round_trippers.go:580]     Audit-Id: 24c0fa92-c9aa-46f3-82d6-26c1e5abd54a
	I0416 16:40:08.592899    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:08.592899    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:08.592899    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:08.592899    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:08.592899    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:08 GMT
	I0416 16:40:08.592899    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:09.081531    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:09.081585    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:09.081638    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:09.081638    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:09.088082    6320 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:40:09.088082    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:09.088343    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:09.088343    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:09 GMT
	I0416 16:40:09.088343    6320 round_trippers.go:580]     Audit-Id: 9b885e5b-bdc5-4405-a35b-84a54ff808d9
	I0416 16:40:09.088343    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:09.088343    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:09.088343    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:09.088747    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"519","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0416 16:40:09.089637    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:09.089637    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:09.089709    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:09.089739    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:09.092032    6320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:40:09.092032    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:09.092032    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:09.092032    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:09.092032    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:09 GMT
	I0416 16:40:09.092032    6320 round_trippers.go:580]     Audit-Id: 82f900c6-590c-4e1f-ba3a-935dafbdf4c7
	I0416 16:40:09.092032    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:09.092032    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:09.092032    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:09.580146    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:09.580146    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:09.580648    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:09.580648    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:09.588572    6320 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 16:40:09.588572    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:09.588572    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:09.588572    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:09.588572    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:09.588572    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:09.588572    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:09 GMT
	I0416 16:40:09.588572    6320 round_trippers.go:580]     Audit-Id: 0f9eff8b-d43f-4b44-ae9c-c4666551e5a0
	I0416 16:40:09.589558    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"562","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6450 chars]
	I0416 16:40:09.590429    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:09.590429    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:09.590512    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:09.590512    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:09.593291    6320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:40:09.593291    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:09.593291    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:09.593291    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:09 GMT
	I0416 16:40:09.593291    6320 round_trippers.go:580]     Audit-Id: 4a396048-cef0-44c0-a81e-d0100bf56261
	I0416 16:40:09.593291    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:09.593291    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:09.593291    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:09.594176    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:09.594176    6320 pod_ready.go:92] pod "coredns-76f75df574-s48fs" in "kube-system" namespace has status "Ready":"True"
	I0416 16:40:09.594176    6320 pod_ready.go:81] duration metric: took 9.0229868s for pod "coredns-76f75df574-s48fs" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:09.594176    6320 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-538700" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:09.594709    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/etcd-functional-538700
	I0416 16:40:09.594777    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:09.594777    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:09.594777    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:09.596967    6320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:40:09.596967    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:09.596967    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:09.596967    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:09 GMT
	I0416 16:40:09.596967    6320 round_trippers.go:580]     Audit-Id: acb6ccde-1b18-43a0-b37c-0f6e5221aea6
	I0416 16:40:09.596967    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:09.596967    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:09.596967    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:09.597944    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-538700","namespace":"kube-system","uid":"b998d2aa-a709-4f30-ad47-3ec27ce8774d","resourceVersion":"492","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.95.169:2379","kubernetes.io/config.hash":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.mirror":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.seen":"2024-04-16T16:38:05.018028412Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0416 16:40:09.597944    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:09.597944    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:09.597944    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:09.597944    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:09.600424    6320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:40:09.601592    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:09.601592    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:09.601592    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:09.601592    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:09 GMT
	I0416 16:40:09.601645    6320 round_trippers.go:580]     Audit-Id: 8d7c5bba-92b5-430c-90ce-90fac305be16
	I0416 16:40:09.601645    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:09.601645    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:09.601750    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:10.107420    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/etcd-functional-538700
	I0416 16:40:10.107420    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:10.107420    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:10.107420    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:10.111905    6320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:40:10.111905    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:10.111905    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:10.111905    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:10.111905    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:10 GMT
	I0416 16:40:10.111905    6320 round_trippers.go:580]     Audit-Id: 3261e611-6b9b-4688-9db5-1bfa5e65ab40
	I0416 16:40:10.111905    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:10.111905    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:10.112376    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-538700","namespace":"kube-system","uid":"b998d2aa-a709-4f30-ad47-3ec27ce8774d","resourceVersion":"492","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.95.169:2379","kubernetes.io/config.hash":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.mirror":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.seen":"2024-04-16T16:38:05.018028412Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0416 16:40:10.113187    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:10.113274    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:10.113274    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:10.113274    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:10.116371    6320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:40:10.116371    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:10.116371    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:10.116371    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:10.116371    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:10.116371    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:10.116371    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:10 GMT
	I0416 16:40:10.116371    6320 round_trippers.go:580]     Audit-Id: 9d98e530-17bb-419c-88b3-b224a8b9e962
	I0416 16:40:10.116662    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:10.606346    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/etcd-functional-538700
	I0416 16:40:10.606346    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:10.606346    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:10.606346    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:10.610856    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:10.610856    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:10.610856    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:10.610856    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:10.610856    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:10 GMT
	I0416 16:40:10.610856    6320 round_trippers.go:580]     Audit-Id: 1b964b74-4ad2-49d1-96bf-5eb7af5e17ee
	I0416 16:40:10.610856    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:10.610856    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:10.610856    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-538700","namespace":"kube-system","uid":"b998d2aa-a709-4f30-ad47-3ec27ce8774d","resourceVersion":"492","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.95.169:2379","kubernetes.io/config.hash":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.mirror":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.seen":"2024-04-16T16:38:05.018028412Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0416 16:40:10.611801    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:10.611801    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:10.611864    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:10.611864    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:10.615431    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:10.615431    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:10.615431    6320 round_trippers.go:580]     Audit-Id: 70373112-f916-4b2d-9b5f-446011b938dd
	I0416 16:40:10.615431    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:10.615431    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:10.615431    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:10.615431    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:10.615431    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:10 GMT
	I0416 16:40:10.615431    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:11.106136    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/etcd-functional-538700
	I0416 16:40:11.106136    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:11.106136    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:11.106657    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:11.110086    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:11.110086    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:11.110086    6320 round_trippers.go:580]     Audit-Id: 6b962f85-72ad-4edb-b642-4ba2d3abc9e7
	I0416 16:40:11.110086    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:11.110086    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:11.110086    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:11.110086    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:11.110086    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:11 GMT
	I0416 16:40:11.110086    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-538700","namespace":"kube-system","uid":"b998d2aa-a709-4f30-ad47-3ec27ce8774d","resourceVersion":"492","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.95.169:2379","kubernetes.io/config.hash":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.mirror":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.seen":"2024-04-16T16:38:05.018028412Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0416 16:40:11.111443    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:11.111520    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:11.111520    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:11.111520    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:11.113941    6320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:40:11.113941    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:11.113941    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:11.113941    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:11.113941    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:11 GMT
	I0416 16:40:11.113941    6320 round_trippers.go:580]     Audit-Id: d92d96aa-3508-43a9-b3d2-c132b1c37321
	I0416 16:40:11.113941    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:11.114937    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:11.115191    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:11.609165    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/etcd-functional-538700
	I0416 16:40:11.609165    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:11.609165    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:11.609165    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:11.613962    6320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:40:11.613962    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:11.613962    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:11.613962    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:11.614084    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:11.614084    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:11 GMT
	I0416 16:40:11.614084    6320 round_trippers.go:580]     Audit-Id: 5456a81b-afaa-46f6-ab44-afb1d0f9c1ef
	I0416 16:40:11.614084    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:11.614496    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-538700","namespace":"kube-system","uid":"b998d2aa-a709-4f30-ad47-3ec27ce8774d","resourceVersion":"492","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.95.169:2379","kubernetes.io/config.hash":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.mirror":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.seen":"2024-04-16T16:38:05.018028412Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0416 16:40:11.615261    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:11.615261    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:11.615804    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:11.615804    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:11.626202    6320 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0416 16:40:11.626202    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:11.626202    6320 round_trippers.go:580]     Audit-Id: 2ffa8247-9355-402d-bbdf-4a0650f96d57
	I0416 16:40:11.626202    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:11.626202    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:11.626202    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:11.626202    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:11.626202    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:11 GMT
	I0416 16:40:11.626841    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:11.626841    6320 pod_ready.go:102] pod "etcd-functional-538700" in "kube-system" namespace has status "Ready":"False"
	I0416 16:40:12.107239    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/etcd-functional-538700
	I0416 16:40:12.107239    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:12.107239    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:12.107239    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:12.111532    6320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:40:12.111532    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:12.111532    6320 round_trippers.go:580]     Audit-Id: 967e0579-abc9-4bd7-a363-4a2660a733a6
	I0416 16:40:12.111532    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:12.111532    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:12.111532    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:12.111532    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:12.111532    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:12 GMT
	I0416 16:40:12.112176    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-538700","namespace":"kube-system","uid":"b998d2aa-a709-4f30-ad47-3ec27ce8774d","resourceVersion":"492","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.95.169:2379","kubernetes.io/config.hash":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.mirror":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.seen":"2024-04-16T16:38:05.018028412Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0416 16:40:12.113100    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:12.113100    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:12.113180    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:12.113180    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:12.116142    6320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:40:12.116142    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:12.116142    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:12.116142    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:12 GMT
	I0416 16:40:12.116142    6320 round_trippers.go:580]     Audit-Id: 6034ea3a-30c0-4af1-9d0a-b4dd4d3b2e49
	I0416 16:40:12.116142    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:12.116142    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:12.116142    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:12.116142    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:12.602269    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/etcd-functional-538700
	I0416 16:40:12.602595    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:12.602595    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:12.602595    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:12.608023    6320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:40:12.608023    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:12.608023    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:12 GMT
	I0416 16:40:12.608023    6320 round_trippers.go:580]     Audit-Id: 001e774d-34ab-4edf-bc1d-581f7b2e375f
	I0416 16:40:12.608131    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:12.608131    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:12.608131    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:12.608131    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:12.608407    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-538700","namespace":"kube-system","uid":"b998d2aa-a709-4f30-ad47-3ec27ce8774d","resourceVersion":"492","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.95.169:2379","kubernetes.io/config.hash":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.mirror":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.seen":"2024-04-16T16:38:05.018028412Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0416 16:40:12.609293    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:12.609293    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:12.609372    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:12.609372    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:12.614257    6320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:40:12.614257    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:12.614257    6320 round_trippers.go:580]     Audit-Id: 215da4aa-1f02-4487-94d1-5acdcd1d041e
	I0416 16:40:12.614257    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:12.614257    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:12.614257    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:12.615212    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:12.615212    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:12 GMT
	I0416 16:40:12.615302    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:13.102745    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/etcd-functional-538700
	I0416 16:40:13.102745    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:13.102745    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:13.102745    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:13.106776    6320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:40:13.106776    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:13.106776    6320 round_trippers.go:580]     Audit-Id: fa69f4d3-e074-4bdc-aa77-68ae484cc227
	I0416 16:40:13.106776    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:13.106776    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:13.106776    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:13.106776    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:13.106776    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:13 GMT
	I0416 16:40:13.106776    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-538700","namespace":"kube-system","uid":"b998d2aa-a709-4f30-ad47-3ec27ce8774d","resourceVersion":"492","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.95.169:2379","kubernetes.io/config.hash":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.mirror":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.seen":"2024-04-16T16:38:05.018028412Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0416 16:40:13.108087    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:13.108087    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:13.108149    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:13.108149    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:13.111186    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:13.111186    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:13.111186    6320 round_trippers.go:580]     Audit-Id: b9da363c-dfac-4655-82d4-74069340889d
	I0416 16:40:13.111186    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:13.111186    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:13.111186    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:13.111186    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:13.111186    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:13 GMT
	I0416 16:40:13.111186    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:13.601924    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/etcd-functional-538700
	I0416 16:40:13.601924    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:13.602284    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:13.602284    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:13.605974    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:13.606102    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:13.606102    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:13.606102    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:13.606102    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:13.606102    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:13.606183    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:13 GMT
	I0416 16:40:13.606183    6320 round_trippers.go:580]     Audit-Id: bc31574f-9451-4a57-9b5d-a829fd6cb336
	I0416 16:40:13.606318    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-538700","namespace":"kube-system","uid":"b998d2aa-a709-4f30-ad47-3ec27ce8774d","resourceVersion":"492","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.95.169:2379","kubernetes.io/config.hash":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.mirror":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.seen":"2024-04-16T16:38:05.018028412Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0416 16:40:13.607395    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:13.607494    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:13.607494    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:13.607494    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:13.609762    6320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:40:13.609762    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:13.609762    6320 round_trippers.go:580]     Audit-Id: 3ce4b01f-f6d4-49ea-adc4-d418b0784006
	I0416 16:40:13.609762    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:13.609762    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:13.609762    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:13.609762    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:13.609762    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:13 GMT
	I0416 16:40:13.610254    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:14.103459    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/etcd-functional-538700
	I0416 16:40:14.103459    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:14.103579    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:14.103579    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:14.106818    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:14.106818    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:14.106818    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:14.106818    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:14 GMT
	I0416 16:40:14.106818    6320 round_trippers.go:580]     Audit-Id: 235dbed2-2b7a-4a71-837f-7a3a9391d31e
	I0416 16:40:14.106818    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:14.106818    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:14.106818    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:14.107336    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-538700","namespace":"kube-system","uid":"b998d2aa-a709-4f30-ad47-3ec27ce8774d","resourceVersion":"492","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.95.169:2379","kubernetes.io/config.hash":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.mirror":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.seen":"2024-04-16T16:38:05.018028412Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0416 16:40:14.108061    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:14.108180    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:14.108180    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:14.108180    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:14.110592    6320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:40:14.110592    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:14.110592    6320 round_trippers.go:580]     Audit-Id: 19edc876-811a-43a4-94d9-4d8f4c5e72d3
	I0416 16:40:14.110592    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:14.110592    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:14.110592    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:14.110592    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:14.110592    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:14 GMT
	I0416 16:40:14.111587    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:14.111950    6320 pod_ready.go:102] pod "etcd-functional-538700" in "kube-system" namespace has status "Ready":"False"
	I0416 16:40:14.600949    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/etcd-functional-538700
	I0416 16:40:14.601079    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:14.601079    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:14.601079    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:14.609607    6320 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0416 16:40:14.609607    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:14.609607    6320 round_trippers.go:580]     Audit-Id: 71e1796c-9631-4e33-945c-631939577e35
	I0416 16:40:14.609607    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:14.609607    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:14.609607    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:14.609607    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:14.609607    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:14 GMT
	I0416 16:40:14.610284    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-538700","namespace":"kube-system","uid":"b998d2aa-a709-4f30-ad47-3ec27ce8774d","resourceVersion":"572","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.95.169:2379","kubernetes.io/config.hash":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.mirror":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.seen":"2024-04-16T16:38:05.018028412Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6373 chars]
	I0416 16:40:14.610374    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:14.610374    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:14.610374    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:14.610374    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:14.614038    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:14.614038    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:14.614038    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:14.614038    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:14.614038    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:14.614038    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:14.614473    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:14 GMT
	I0416 16:40:14.614473    6320 round_trippers.go:580]     Audit-Id: 3eb02022-9233-427b-8b79-f78a8b8033d1
	I0416 16:40:14.614773    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:14.615118    6320 pod_ready.go:92] pod "etcd-functional-538700" in "kube-system" namespace has status "Ready":"True"
	I0416 16:40:14.615118    6320 pod_ready.go:81] duration metric: took 5.0206581s for pod "etcd-functional-538700" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:14.615118    6320 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-538700" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:14.615118    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-538700
	I0416 16:40:14.615118    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:14.615118    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:14.615118    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:14.617970    6320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:40:14.618977    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:14.618977    6320 round_trippers.go:580]     Audit-Id: e21d2561-8eee-4ca5-adbf-130f3f5d8124
	I0416 16:40:14.618977    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:14.618977    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:14.618977    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:14.618977    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:14.618977    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:14 GMT
	I0416 16:40:14.619226    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-538700","namespace":"kube-system","uid":"20d2bda4-fd6f-4316-8e79-79522df9a7d9","resourceVersion":"570","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.95.169:8441","kubernetes.io/config.hash":"b9731fbfffcb2fed7b22d6c9e1cde727","kubernetes.io/config.mirror":"b9731fbfffcb2fed7b22d6c9e1cde727","kubernetes.io/config.seen":"2024-04-16T16:38:05.018032712Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7904 chars]
	I0416 16:40:14.619896    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:14.619943    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:14.619943    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:14.619974    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:14.623170    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:14.623170    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:14.623170    6320 round_trippers.go:580]     Audit-Id: 400d0a21-a428-4ef3-8155-837892283603
	I0416 16:40:14.623170    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:14.623170    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:14.623170    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:14.623170    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:14.623170    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:14 GMT
	I0416 16:40:14.624216    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:14.624643    6320 pod_ready.go:92] pod "kube-apiserver-functional-538700" in "kube-system" namespace has status "Ready":"True"
	I0416 16:40:14.624718    6320 pod_ready.go:81] duration metric: took 9.5986ms for pod "kube-apiserver-functional-538700" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:14.624718    6320 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-538700" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:14.624799    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-538700
	I0416 16:40:14.624873    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:14.624873    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:14.624873    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:14.630097    6320 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:40:14.630097    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:14.630097    6320 round_trippers.go:580]     Audit-Id: 3d713133-051d-43e1-b19e-7e346e58d801
	I0416 16:40:14.630097    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:14.630097    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:14.630097    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:14.630097    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:14.630097    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:14 GMT
	I0416 16:40:14.630621    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-538700","namespace":"kube-system","uid":"633942b9-3eee-4088-80fb-a6e12193048a","resourceVersion":"559","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3ebac637e6b16c653fe983bd6cadcc87","kubernetes.io/config.mirror":"3ebac637e6b16c653fe983bd6cadcc87","kubernetes.io/config.seen":"2024-04-16T16:38:05.018033912Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7472 chars]
	I0416 16:40:14.630844    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:14.630844    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:14.630844    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:14.630844    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:14.637945    6320 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 16:40:14.637945    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:14.637945    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:14.637945    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:14 GMT
	I0416 16:40:14.637945    6320 round_trippers.go:580]     Audit-Id: cecf8345-5903-4ba9-aae0-e58336f046c6
	I0416 16:40:14.637945    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:14.637945    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:14.638875    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:14.639052    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:14.639509    6320 pod_ready.go:92] pod "kube-controller-manager-functional-538700" in "kube-system" namespace has status "Ready":"True"
	I0416 16:40:14.639549    6320 pod_ready.go:81] duration metric: took 14.7907ms for pod "kube-controller-manager-functional-538700" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:14.639578    6320 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-29dsg" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:14.639691    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/kube-proxy-29dsg
	I0416 16:40:14.639720    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:14.639720    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:14.639758    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:14.641965    6320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:40:14.641965    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:14.641965    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:14.641965    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:14 GMT
	I0416 16:40:14.641965    6320 round_trippers.go:580]     Audit-Id: d02d8dd6-76cf-4665-b0c2-2350cd2d4b36
	I0416 16:40:14.641965    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:14.641965    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:14.641965    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:14.641965    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-29dsg","generateName":"kube-proxy-","namespace":"kube-system","uid":"93b5000d-9b1b-4346-9f8d-73e52b42af0e","resourceVersion":"521","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9c994168-2502-4985-b9be-05f45127800c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9c994168-2502-4985-b9be-05f45127800c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6035 chars]
	I0416 16:40:14.643690    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:14.643690    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:14.643690    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:14.643690    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:14.646240    6320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:40:14.646240    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:14.646240    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:14 GMT
	I0416 16:40:14.646240    6320 round_trippers.go:580]     Audit-Id: 9aa4d20a-1187-4a7a-beb5-5eb462452499
	I0416 16:40:14.646240    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:14.646240    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:14.646240    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:14.646240    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:14.646827    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:14.647370    6320 pod_ready.go:92] pod "kube-proxy-29dsg" in "kube-system" namespace has status "Ready":"True"
	I0416 16:40:14.647411    6320 pod_ready.go:81] duration metric: took 7.833ms for pod "kube-proxy-29dsg" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:14.647411    6320 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-538700" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:14.647557    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-538700
	I0416 16:40:14.647557    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:14.647594    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:14.647594    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:14.649815    6320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:40:14.649815    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:14.649815    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:14.649815    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:14.649815    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:14.649815    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:14 GMT
	I0416 16:40:14.649815    6320 round_trippers.go:580]     Audit-Id: 66e243b1-235d-4009-a9bf-7069a4c3fb1f
	I0416 16:40:14.649815    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:14.650347    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-538700","namespace":"kube-system","uid":"1c487e8d-bb63-4f07-a10f-bf8c2fbb4974","resourceVersion":"558","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5b509c7d6e730742751e21c9b3d6542b","kubernetes.io/config.mirror":"5b509c7d6e730742751e21c9b3d6542b","kubernetes.io/config.seen":"2024-04-16T16:38:05.018035312Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5202 chars]
	I0416 16:40:14.650843    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:14.650882    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:14.650882    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:14.650882    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:14.653048    6320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:40:14.653375    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:14.653375    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:14.653375    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:14.653375    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:14.653423    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:14 GMT
	I0416 16:40:14.653423    6320 round_trippers.go:580]     Audit-Id: 39d11955-448d-48ef-9787-2a2503110645
	I0416 16:40:14.653423    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:14.653728    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:14.654170    6320 pod_ready.go:92] pod "kube-scheduler-functional-538700" in "kube-system" namespace has status "Ready":"True"
	I0416 16:40:14.654215    6320 pod_ready.go:81] duration metric: took 6.7736ms for pod "kube-scheduler-functional-538700" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:14.654245    6320 pod_ready.go:38] duration metric: took 14.1078361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 16:40:14.654285    6320 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 16:40:14.679049    6320 command_runner.go:130] > -16
	I0416 16:40:14.679175    6320 ops.go:34] apiserver oom_adj: -16
	I0416 16:40:14.679175    6320 kubeadm.go:591] duration metric: took 22.6674556s to restartPrimaryControlPlane
	I0416 16:40:14.679175    6320 kubeadm.go:393] duration metric: took 22.722651s to StartCluster
	I0416 16:40:14.679244    6320 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:40:14.679486    6320 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:40:14.680875    6320 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:40:14.682759    6320 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.95.169 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:40:14.684083    6320 out.go:177] * Verifying Kubernetes components...
	I0416 16:40:14.683060    6320 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 16:40:14.683207    6320 config.go:182] Loaded profile config "functional-538700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:40:14.684148    6320 addons.go:69] Setting default-storageclass=true in profile "functional-538700"
	I0416 16:40:14.684148    6320 addons.go:69] Setting storage-provisioner=true in profile "functional-538700"
	I0416 16:40:14.685150    6320 addons.go:234] Setting addon storage-provisioner=true in "functional-538700"
	I0416 16:40:14.684336    6320 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-538700"
	W0416 16:40:14.685222    6320 addons.go:243] addon storage-provisioner should already be in state true
	I0416 16:40:14.685410    6320 host.go:66] Checking if "functional-538700" exists ...
	I0416 16:40:14.685547    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:40:14.686263    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:40:14.696721    6320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:40:14.959151    6320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:40:14.984268    6320 node_ready.go:35] waiting up to 6m0s for node "functional-538700" to be "Ready" ...
	I0416 16:40:14.984470    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:14.984470    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:14.984470    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:14.984470    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:14.987803    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:14.987803    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:14.988821    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:14.988821    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:14.988821    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:15 GMT
	I0416 16:40:14.988821    6320 round_trippers.go:580]     Audit-Id: f430ec8f-1772-42f0-bf4d-b781d83a23af
	I0416 16:40:14.988821    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:14.988821    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:14.989241    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:14.989934    6320 node_ready.go:49] node "functional-538700" has status "Ready":"True"
	I0416 16:40:14.989934    6320 node_ready.go:38] duration metric: took 5.6012ms for node "functional-538700" to be "Ready" ...
	I0416 16:40:14.989934    6320 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 16:40:15.008047    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods
	I0416 16:40:15.008047    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:15.008047    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:15.008047    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:15.012194    6320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:40:15.012194    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:15.012194    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:15.012490    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:15.012490    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:15.012490    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:15 GMT
	I0416 16:40:15.012490    6320 round_trippers.go:580]     Audit-Id: 5e22ad54-bd25-47cd-8d0e-aa7d835a2bca
	I0416 16:40:15.012490    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:15.013626    6320 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"572"},"items":[{"metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"562","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50098 chars]
	I0416 16:40:15.015928    6320 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-s48fs" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:15.215162    6320 request.go:629] Waited for 198.8907ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:15.215201    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-s48fs
	I0416 16:40:15.215201    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:15.215201    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:15.215201    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:15.221885    6320 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:40:15.221885    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:15.221885    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:15.221885    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:15.221885    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:15.221885    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:15 GMT
	I0416 16:40:15.221885    6320 round_trippers.go:580]     Audit-Id: 50069384-fffb-436d-9ed4-3d52a54baf70
	I0416 16:40:15.221885    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:15.222432    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"562","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6450 chars]
	I0416 16:40:15.401414    6320 request.go:629] Waited for 178.1163ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:15.401531    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:15.401627    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:15.401627    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:15.401627    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:15.405032    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:15.405032    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:15.405354    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:15.405354    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:15.405354    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:15.405354    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:15.405354    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:15 GMT
	I0416 16:40:15.405354    6320 round_trippers.go:580]     Audit-Id: bd702fda-0420-4468-942b-ce64c0bc42d8
	I0416 16:40:15.405764    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:15.406197    6320 pod_ready.go:92] pod "coredns-76f75df574-s48fs" in "kube-system" namespace has status "Ready":"True"
	I0416 16:40:15.406286    6320 pod_ready.go:81] duration metric: took 390.3366ms for pod "coredns-76f75df574-s48fs" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:15.406428    6320 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-538700" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:15.607691    6320 request.go:629] Waited for 201.1569ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/etcd-functional-538700
	I0416 16:40:15.607912    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/etcd-functional-538700
	I0416 16:40:15.607912    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:15.607912    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:15.607966    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:15.611237    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:15.611548    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:15.611548    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:15.611548    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:15.611548    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:15 GMT
	I0416 16:40:15.611548    6320 round_trippers.go:580]     Audit-Id: e171585a-98f4-4e12-8af8-7feb7610bbda
	I0416 16:40:15.611548    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:15.611548    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:15.611785    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-538700","namespace":"kube-system","uid":"b998d2aa-a709-4f30-ad47-3ec27ce8774d","resourceVersion":"572","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.95.169:2379","kubernetes.io/config.hash":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.mirror":"d2892e2ccdcac15ef7d9b5d9e9b5179d","kubernetes.io/config.seen":"2024-04-16T16:38:05.018028412Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6373 chars]
	I0416 16:40:15.813060    6320 request.go:629] Waited for 200.939ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:15.813060    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:15.813301    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:15.813301    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:15.813301    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:15.817716    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:15.817716    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:15.817788    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:15.817788    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:15.817788    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:15 GMT
	I0416 16:40:15.817788    6320 round_trippers.go:580]     Audit-Id: 4a3aecaf-664b-4c8d-87a5-7199ef20d892
	I0416 16:40:15.817788    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:15.817788    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:15.817889    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:15.818572    6320 pod_ready.go:92] pod "etcd-functional-538700" in "kube-system" namespace has status "Ready":"True"
	I0416 16:40:15.818572    6320 pod_ready.go:81] duration metric: took 412.1211ms for pod "etcd-functional-538700" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:15.818572    6320 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-538700" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:16.004836    6320 request.go:629] Waited for 186.1533ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-538700
	I0416 16:40:16.005188    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-538700
	I0416 16:40:16.005188    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:16.005272    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:16.005272    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:16.009378    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:16.009378    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:16.009378    6320 round_trippers.go:580]     Audit-Id: 9dfdc7fd-6833-4a6c-a6e5-feeebbbd9d76
	I0416 16:40:16.009474    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:16.009474    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:16.009474    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:16.009474    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:16.009474    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:16 GMT
	I0416 16:40:16.009771    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-538700","namespace":"kube-system","uid":"20d2bda4-fd6f-4316-8e79-79522df9a7d9","resourceVersion":"570","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.95.169:8441","kubernetes.io/config.hash":"b9731fbfffcb2fed7b22d6c9e1cde727","kubernetes.io/config.mirror":"b9731fbfffcb2fed7b22d6c9e1cde727","kubernetes.io/config.seen":"2024-04-16T16:38:05.018032712Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7904 chars]
	I0416 16:40:16.211648    6320 request.go:629] Waited for 201.1287ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:16.212007    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:16.212007    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:16.212007    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:16.212007    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:16.215196    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:16.215196    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:16.215196    6320 round_trippers.go:580]     Audit-Id: 92705c74-7196-43e6-b130-2236e4cbd759
	I0416 16:40:16.215785    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:16.215785    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:16.215785    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:16.215785    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:16.215785    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:16 GMT
	I0416 16:40:16.215924    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:16.216676    6320 pod_ready.go:92] pod "kube-apiserver-functional-538700" in "kube-system" namespace has status "Ready":"True"
	I0416 16:40:16.216676    6320 pod_ready.go:81] duration metric: took 397.9811ms for pod "kube-apiserver-functional-538700" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:16.216725    6320 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-538700" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:16.403657    6320 request.go:629] Waited for 186.8236ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-538700
	I0416 16:40:16.403883    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-538700
	I0416 16:40:16.403883    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:16.403883    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:16.403966    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:16.407257    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:16.407257    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:16.407949    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:16.407949    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:16.407949    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:16 GMT
	I0416 16:40:16.407949    6320 round_trippers.go:580]     Audit-Id: d65610c9-ed24-41b8-a208-46ac925b547c
	I0416 16:40:16.407949    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:16.407949    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:16.408340    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-538700","namespace":"kube-system","uid":"633942b9-3eee-4088-80fb-a6e12193048a","resourceVersion":"559","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3ebac637e6b16c653fe983bd6cadcc87","kubernetes.io/config.mirror":"3ebac637e6b16c653fe983bd6cadcc87","kubernetes.io/config.seen":"2024-04-16T16:38:05.018033912Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7472 chars]
	I0416 16:40:16.608545    6320 request.go:629] Waited for 199.9207ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:16.609033    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:16.609033    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:16.609033    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:16.609033    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:16.612907    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:16.612907    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:16.612907    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:16 GMT
	I0416 16:40:16.612907    6320 round_trippers.go:580]     Audit-Id: 32d1724a-8450-4f75-ae44-85686c4cbbcd
	I0416 16:40:16.612907    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:16.612907    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:16.612907    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:16.612907    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:16.613215    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:16.613718    6320 pod_ready.go:92] pod "kube-controller-manager-functional-538700" in "kube-system" namespace has status "Ready":"True"
	I0416 16:40:16.613718    6320 pod_ready.go:81] duration metric: took 396.9705ms for pod "kube-controller-manager-functional-538700" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:16.613718    6320 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-29dsg" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:16.698822    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:40:16.698822    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:40:16.699741    6320 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:40:16.700289    6320 kapi.go:59] client config for functional-538700: &rest.Config{Host:"https://172.19.95.169:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-538700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-538700\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 16:40:16.701041    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:40:16.701041    6320 addons.go:234] Setting addon default-storageclass=true in "functional-538700"
	I0416 16:40:16.701041    6320 main.go:141] libmachine: [stderr =====>] : 
	W0416 16:40:16.701041    6320 addons.go:243] addon default-storageclass should already be in state true
	I0416 16:40:16.701125    6320 host.go:66] Checking if "functional-538700" exists ...
	I0416 16:40:16.701933    6320 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:40:16.702008    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:40:16.702683    6320 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:40:16.702683    6320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 16:40:16.702769    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:40:16.812287    6320 request.go:629] Waited for 198.4154ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/kube-proxy-29dsg
	I0416 16:40:16.812287    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/kube-proxy-29dsg
	I0416 16:40:16.812287    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:16.812287    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:16.812287    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:16.817259    6320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:40:16.817259    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:16.817259    6320 round_trippers.go:580]     Audit-Id: 930fb338-fede-4636-bab7-15ace1312710
	I0416 16:40:16.817259    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:16.817373    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:16.817373    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:16.817373    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:16.817373    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:16 GMT
	I0416 16:40:16.817774    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-29dsg","generateName":"kube-proxy-","namespace":"kube-system","uid":"93b5000d-9b1b-4346-9f8d-73e52b42af0e","resourceVersion":"521","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9c994168-2502-4985-b9be-05f45127800c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9c994168-2502-4985-b9be-05f45127800c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6035 chars]
	I0416 16:40:17.002834    6320 request.go:629] Waited for 184.0408ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:17.002834    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:17.002834    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:17.002834    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:17.002834    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:17.006576    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:17.007522    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:17.007522    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:17.007522    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:17.007522    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:17.007522    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:17.007522    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:17 GMT
	I0416 16:40:17.007522    6320 round_trippers.go:580]     Audit-Id: 407907f6-03d5-4898-b223-f6dd4e52b81e
	I0416 16:40:17.007657    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:17.008198    6320 pod_ready.go:92] pod "kube-proxy-29dsg" in "kube-system" namespace has status "Ready":"True"
	I0416 16:40:17.008198    6320 pod_ready.go:81] duration metric: took 394.3691ms for pod "kube-proxy-29dsg" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:17.008198    6320 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-538700" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:17.208613    6320 request.go:629] Waited for 200.1778ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-538700
	I0416 16:40:17.208613    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-538700
	I0416 16:40:17.208613    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:17.208613    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:17.208613    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:17.214268    6320 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:40:17.214268    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:17.214268    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:17.214268    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:17.214268    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:17 GMT
	I0416 16:40:17.214268    6320 round_trippers.go:580]     Audit-Id: 49dd112b-5d12-41de-bd60-91a215a61ada
	I0416 16:40:17.214268    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:17.214268    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:17.214268    6320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-538700","namespace":"kube-system","uid":"1c487e8d-bb63-4f07-a10f-bf8c2fbb4974","resourceVersion":"558","creationTimestamp":"2024-04-16T16:38:05Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5b509c7d6e730742751e21c9b3d6542b","kubernetes.io/config.mirror":"5b509c7d6e730742751e21c9b3d6542b","kubernetes.io/config.seen":"2024-04-16T16:38:05.018035312Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5202 chars]
	I0416 16:40:17.416811    6320 request.go:629] Waited for 201.1779ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:17.416978    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes/functional-538700
	I0416 16:40:17.416978    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:17.416978    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:17.416978    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:17.420250    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:17.420250    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:17.420250    6320 round_trippers.go:580]     Audit-Id: 70fc5c12-24b4-44ee-861d-80df222f1f63
	I0416 16:40:17.420250    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:17.420250    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:17.420250    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:17.420250    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:17.420250    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:17 GMT
	I0416 16:40:17.421346    6320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-16T16:38:01Z","fieldsType":"Fie [truncated 4794 chars]
	I0416 16:40:17.421995    6320 pod_ready.go:92] pod "kube-scheduler-functional-538700" in "kube-system" namespace has status "Ready":"True"
	I0416 16:40:17.421995    6320 pod_ready.go:81] duration metric: took 413.7743ms for pod "kube-scheduler-functional-538700" in "kube-system" namespace to be "Ready" ...
	I0416 16:40:17.422094    6320 pod_ready.go:38] duration metric: took 2.431921s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 16:40:17.422094    6320 api_server.go:52] waiting for apiserver process to appear ...
	I0416 16:40:17.431600    6320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:40:17.455052    6320 command_runner.go:130] > 4778
	I0416 16:40:17.455362    6320 api_server.go:72] duration metric: took 2.7723289s to wait for apiserver process to appear ...
	I0416 16:40:17.455545    6320 api_server.go:88] waiting for apiserver healthz status ...
	I0416 16:40:17.455627    6320 api_server.go:253] Checking apiserver healthz at https://172.19.95.169:8441/healthz ...
	I0416 16:40:17.464324    6320 api_server.go:279] https://172.19.95.169:8441/healthz returned 200:
	ok
	I0416 16:40:17.464704    6320 round_trippers.go:463] GET https://172.19.95.169:8441/version
	I0416 16:40:17.464704    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:17.464704    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:17.464752    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:17.465931    6320 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0416 16:40:17.466546    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:17.466546    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:17.466546    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:17.466546    6320 round_trippers.go:580]     Content-Length: 263
	I0416 16:40:17.466546    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:17 GMT
	I0416 16:40:17.466546    6320 round_trippers.go:580]     Audit-Id: 132f9f5b-05ae-4eb2-b107-28aba636322e
	I0416 16:40:17.466546    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:17.466546    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:17.466546    6320 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0416 16:40:17.466546    6320 api_server.go:141] control plane version: v1.29.3
	I0416 16:40:17.466546    6320 api_server.go:131] duration metric: took 11.0004ms to wait for apiserver health ...
	I0416 16:40:17.466546    6320 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 16:40:17.604558    6320 request.go:629] Waited for 137.8352ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods
	I0416 16:40:17.604558    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods
	I0416 16:40:17.604558    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:17.604558    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:17.604558    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:17.613247    6320 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0416 16:40:17.613595    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:17.613595    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:17.613595    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:17.613595    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:17 GMT
	I0416 16:40:17.613595    6320 round_trippers.go:580]     Audit-Id: 59b3478f-b4dd-47df-b144-8380b93aa3e6
	I0416 16:40:17.613595    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:17.613595    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:17.614947    6320 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"572"},"items":[{"metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"562","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50098 chars]
	I0416 16:40:17.616808    6320 system_pods.go:59] 7 kube-system pods found
	I0416 16:40:17.616808    6320 system_pods.go:61] "coredns-76f75df574-s48fs" [0594a7cf-8c07-464b-916b-37290f0328b7] Running
	I0416 16:40:17.616808    6320 system_pods.go:61] "etcd-functional-538700" [b998d2aa-a709-4f30-ad47-3ec27ce8774d] Running
	I0416 16:40:17.616808    6320 system_pods.go:61] "kube-apiserver-functional-538700" [20d2bda4-fd6f-4316-8e79-79522df9a7d9] Running
	I0416 16:40:17.617394    6320 system_pods.go:61] "kube-controller-manager-functional-538700" [633942b9-3eee-4088-80fb-a6e12193048a] Running
	I0416 16:40:17.617394    6320 system_pods.go:61] "kube-proxy-29dsg" [93b5000d-9b1b-4346-9f8d-73e52b42af0e] Running
	I0416 16:40:17.617394    6320 system_pods.go:61] "kube-scheduler-functional-538700" [1c487e8d-bb63-4f07-a10f-bf8c2fbb4974] Running
	I0416 16:40:17.617394    6320 system_pods.go:61] "storage-provisioner" [2526ffa5-f4ff-4859-9389-2b1bde0ea350] Running
	I0416 16:40:17.617473    6320 system_pods.go:74] duration metric: took 150.8398ms to wait for pod list to return data ...
	I0416 16:40:17.617548    6320 default_sa.go:34] waiting for default service account to be created ...
	I0416 16:40:17.812805    6320 request.go:629] Waited for 194.775ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.95.169:8441/api/v1/namespaces/default/serviceaccounts
	I0416 16:40:17.812933    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/default/serviceaccounts
	I0416 16:40:17.813063    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:17.813101    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:17.813101    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:17.821087    6320 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 16:40:17.821087    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:17.821891    6320 round_trippers.go:580]     Content-Length: 261
	I0416 16:40:17.821891    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:17 GMT
	I0416 16:40:17.821891    6320 round_trippers.go:580]     Audit-Id: e0a2e63e-7844-4060-974e-67a8e22b0eed
	I0416 16:40:17.821891    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:17.821891    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:17.821891    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:17.821891    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:17.821943    6320 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"572"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c8cd9c1a-c7db-4fe9-9d68-9835d4badb4c","resourceVersion":"342","creationTimestamp":"2024-04-16T16:38:16Z"}}]}
	I0416 16:40:17.821943    6320 default_sa.go:45] found service account: "default"
	I0416 16:40:17.821943    6320 default_sa.go:55] duration metric: took 204.2291ms for default service account to be created ...
	I0416 16:40:17.821943    6320 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 16:40:18.016237    6320 request.go:629] Waited for 194.2825ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods
	I0416 16:40:18.016454    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/namespaces/kube-system/pods
	I0416 16:40:18.016454    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:18.016454    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:18.016454    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:18.020763    6320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:40:18.020763    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:18.020763    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:18.020763    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:18 GMT
	I0416 16:40:18.020763    6320 round_trippers.go:580]     Audit-Id: af84a63c-75a9-4d12-840a-8d590d1b21d0
	I0416 16:40:18.020763    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:18.020763    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:18.020763    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:18.025908    6320 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"572"},"items":[{"metadata":{"name":"coredns-76f75df574-s48fs","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"0594a7cf-8c07-464b-916b-37290f0328b7","resourceVersion":"562","creationTimestamp":"2024-04-16T16:38:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"f71c0d45-8b38-4759-86dc-12aedcccf8f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T16:38:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f71c0d45-8b38-4759-86dc-12aedcccf8f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50098 chars]
	I0416 16:40:18.028806    6320 system_pods.go:86] 7 kube-system pods found
	I0416 16:40:18.028806    6320 system_pods.go:89] "coredns-76f75df574-s48fs" [0594a7cf-8c07-464b-916b-37290f0328b7] Running
	I0416 16:40:18.028806    6320 system_pods.go:89] "etcd-functional-538700" [b998d2aa-a709-4f30-ad47-3ec27ce8774d] Running
	I0416 16:40:18.028891    6320 system_pods.go:89] "kube-apiserver-functional-538700" [20d2bda4-fd6f-4316-8e79-79522df9a7d9] Running
	I0416 16:40:18.028891    6320 system_pods.go:89] "kube-controller-manager-functional-538700" [633942b9-3eee-4088-80fb-a6e12193048a] Running
	I0416 16:40:18.028891    6320 system_pods.go:89] "kube-proxy-29dsg" [93b5000d-9b1b-4346-9f8d-73e52b42af0e] Running
	I0416 16:40:18.028891    6320 system_pods.go:89] "kube-scheduler-functional-538700" [1c487e8d-bb63-4f07-a10f-bf8c2fbb4974] Running
	I0416 16:40:18.028891    6320 system_pods.go:89] "storage-provisioner" [2526ffa5-f4ff-4859-9389-2b1bde0ea350] Running
	I0416 16:40:18.028891    6320 system_pods.go:126] duration metric: took 206.9364ms to wait for k8s-apps to be running ...
	I0416 16:40:18.028891    6320 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 16:40:18.038077    6320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:18.063747    6320 system_svc.go:56] duration metric: took 34.8533ms WaitForService to wait for kubelet
	I0416 16:40:18.063869    6320 kubeadm.go:576] duration metric: took 3.3806793s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:40:18.063869    6320 node_conditions.go:102] verifying NodePressure condition ...
	I0416 16:40:18.205314    6320 request.go:629] Waited for 141.2176ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.95.169:8441/api/v1/nodes
	I0416 16:40:18.205532    6320 round_trippers.go:463] GET https://172.19.95.169:8441/api/v1/nodes
	I0416 16:40:18.205532    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:18.205532    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:18.205532    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:18.211805    6320 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:40:18.211867    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:18.211911    6320 round_trippers.go:580]     Audit-Id: ae44ad62-728c-4d07-a6af-d3af4db308e7
	I0416 16:40:18.211911    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:18.211967    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:18.211967    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:18.212005    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:18.212037    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:18 GMT
	I0416 16:40:18.212351    6320 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"574"},"items":[{"metadata":{"name":"functional-538700","uid":"b7bd995f-fd79-4837-91d5-c8e9a77a57fd","resourceVersion":"489","creationTimestamp":"2024-04-16T16:38:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-538700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"functional-538700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T16_38_04_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"m
anagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":" [truncated 4847 chars]
	I0416 16:40:18.213283    6320 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 16:40:18.213283    6320 node_conditions.go:123] node cpu capacity is 2
	I0416 16:40:18.213283    6320 node_conditions.go:105] duration metric: took 149.4054ms to run NodePressure ...
	I0416 16:40:18.213283    6320 start.go:240] waiting for startup goroutines ...
	I0416 16:40:18.678791    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:40:18.678791    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:40:18.679855    6320 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 16:40:18.679875    6320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 16:40:18.679949    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
	I0416 16:40:18.739212    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:40:18.739212    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:40:18.739303    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
	I0416 16:40:20.675148    6320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:40:20.675148    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:40:20.675271    6320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
	I0416 16:40:21.089184    6320 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
	
	I0416 16:40:21.089184    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:40:21.090387    6320 sshutil.go:53] new ssh client: &{IP:172.19.95.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-538700\id_rsa Username:docker}
	I0416 16:40:21.225887    6320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:40:21.916342    6320 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0416 16:40:21.916342    6320 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0416 16:40:21.916342    6320 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0416 16:40:21.916787    6320 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0416 16:40:21.916787    6320 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0416 16:40:21.916787    6320 command_runner.go:130] > pod/storage-provisioner configured
	I0416 16:40:22.940545    6320 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
	
	I0416 16:40:22.940545    6320 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:40:22.941231    6320 sshutil.go:53] new ssh client: &{IP:172.19.95.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-538700\id_rsa Username:docker}
	I0416 16:40:23.073171    6320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 16:40:23.228529    6320 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0416 16:40:23.228900    6320 round_trippers.go:463] GET https://172.19.95.169:8441/apis/storage.k8s.io/v1/storageclasses
	I0416 16:40:23.228954    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:23.229006    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:23.229006    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:23.232791    6320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:40:23.232828    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:23.232828    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:23.232828    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:23.232828    6320 round_trippers.go:580]     Content-Length: 1273
	I0416 16:40:23.232828    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:23 GMT
	I0416 16:40:23.232828    6320 round_trippers.go:580]     Audit-Id: 9bdaed00-c082-4cba-b1df-fb5a7bf893ce
	I0416 16:40:23.232967    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:23.232967    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:23.233074    6320 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"579"},"items":[{"metadata":{"name":"standard","uid":"51e431af-00c2-4026-89ea-6624af760d7e","resourceVersion":"428","creationTimestamp":"2024-04-16T16:38:25Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-16T16:38:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0416 16:40:23.233884    6320 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"51e431af-00c2-4026-89ea-6624af760d7e","resourceVersion":"428","creationTimestamp":"2024-04-16T16:38:25Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-16T16:38:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0416 16:40:23.233884    6320 round_trippers.go:463] PUT https://172.19.95.169:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0416 16:40:23.233884    6320 round_trippers.go:469] Request Headers:
	I0416 16:40:23.233884    6320 round_trippers.go:473]     Content-Type: application/json
	I0416 16:40:23.233884    6320 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:40:23.233884    6320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:40:23.239694    6320 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:40:23.239694    6320 round_trippers.go:577] Response Headers:
	I0416 16:40:23.239694    6320 round_trippers.go:580]     Date: Tue, 16 Apr 2024 16:40:23 GMT
	I0416 16:40:23.239694    6320 round_trippers.go:580]     Audit-Id: f69657cf-38b2-4275-a1c4-7b339a55ba2f
	I0416 16:40:23.239694    6320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 16:40:23.239694    6320 round_trippers.go:580]     Content-Type: application/json
	I0416 16:40:23.239694    6320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5f6af55-6e96-4c55-82f7-5f3e4fa0d04b
	I0416 16:40:23.239694    6320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad73ed5c-fdd7-47fa-b692-2104ff38e652
	I0416 16:40:23.239694    6320 round_trippers.go:580]     Content-Length: 1220
	I0416 16:40:23.239694    6320 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"51e431af-00c2-4026-89ea-6624af760d7e","resourceVersion":"428","creationTimestamp":"2024-04-16T16:38:25Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-16T16:38:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0416 16:40:23.241754    6320 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 16:40:23.242475    6320 addons.go:505] duration metric: took 8.5589297s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 16:40:23.242656    6320 start.go:245] waiting for cluster config update ...
	I0416 16:40:23.242681    6320 start.go:254] writing updated cluster config ...
	I0416 16:40:23.252522    6320 ssh_runner.go:195] Run: rm -f paused
	I0416 16:40:23.375761    6320 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 16:40:23.376825    6320 out.go:177] * Done! kubectl is now configured to use "functional-538700" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.125468241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.125555545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.129488437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.129586242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.129599643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.129759551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.134962005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.135330723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.135514032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.136007156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:40:00 functional-538700 cri-dockerd[4161]: time="2024-04-16T16:40:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3cc9262c21138bea13f7f52b2899b2d374c7ab7b36748b396f865350fb81d74/resolv.conf as [nameserver 172.19.80.1]"
	Apr 16 16:40:00 functional-538700 cri-dockerd[4161]: time="2024-04-16T16:40:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4fa6974b3dbeeb5f0aa209113559cf17915fbb3b9d91db767025c8c51448a635/resolv.conf as [nameserver 172.19.80.1]"
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.521325577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.522301525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.522368828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.522552937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.586615666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.587119591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.587253997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.587486409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:40:00 functional-538700 cri-dockerd[4161]: time="2024-04-16T16:40:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2f8fa98064b321d35d94e3b54e8bde87070c80106246754c1defab1cab84341a/resolv.conf as [nameserver 172.19.80.1]"
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.921215483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.921462096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.921621804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:40:00 functional-538700 dockerd[3889]: time="2024-04-16T16:40:00.921844515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8c18c95f8f872       cbb01a7bd410d       About a minute ago   Running             coredns                   1                   2f8fa98064b32       coredns-76f75df574-s48fs
	ca99f0d8588e2       6e38f40d628db       About a minute ago   Running             storage-provisioner       1                   4fa6974b3dbee       storage-provisioner
	d1312f168204d       a1d263b5dc5b0       About a minute ago   Running             kube-proxy                1                   d3cc9262c2113       kube-proxy-29dsg
	97f842982d1c5       3861cfcd7c04c       2 minutes ago        Running             etcd                      1                   5229c1eec9e54       etcd-functional-538700
	d0e5c804e8c6c       39f995c9f1996       2 minutes ago        Running             kube-apiserver            1                   6b2d815572670       kube-apiserver-functional-538700
	e90849d26bf6c       8c390d98f50c0       2 minutes ago        Running             kube-scheduler            1                   2d81863e27d54       kube-scheduler-functional-538700
	ea2a2f10fefdc       6052a25da3f97       2 minutes ago        Running             kube-controller-manager   1                   a5fb7152a1075       kube-controller-manager-functional-538700
	edb07b9be7937       6e38f40d628db       3 minutes ago        Exited              storage-provisioner       0                   1738348bc4df1       storage-provisioner
	ad194e71f1e9f       cbb01a7bd410d       3 minutes ago        Exited              coredns                   0                   56dacc3357ec6       coredns-76f75df574-s48fs
	2375d686dc68f       a1d263b5dc5b0       3 minutes ago        Exited              kube-proxy                0                   5b7eadad56793       kube-proxy-29dsg
	b1d710fcddde1       3861cfcd7c04c       3 minutes ago        Exited              etcd                      0                   72869fbe0a507       etcd-functional-538700
	6751872e77129       6052a25da3f97       3 minutes ago        Exited              kube-controller-manager   0                   d93d42b6482f2       kube-controller-manager-functional-538700
	cd15ff71bca22       8c390d98f50c0       3 minutes ago        Exited              kube-scheduler            0                   3e85edf411eac       kube-scheduler-functional-538700
	a7b557b7631ed       39f995c9f1996       3 minutes ago        Exited              kube-apiserver            0                   32d5869cfc525       kube-apiserver-functional-538700
	
	
	==> coredns [8c18c95f8f87] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53514 - 25400 "HINFO IN 7198561105957607682.6417858966369845935. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028673313s
	
	
	==> coredns [ad194e71f1e9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48384 - 6565 "HINFO IN 1549649258926989546.8376611798259958088. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045392793s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-538700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-538700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=functional-538700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T16_38_04_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:38:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-538700
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:41:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 16:41:31 +0000   Tue, 16 Apr 2024 16:37:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 16:41:31 +0000   Tue, 16 Apr 2024 16:37:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 16:41:31 +0000   Tue, 16 Apr 2024 16:37:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 16:41:31 +0000   Tue, 16 Apr 2024 16:38:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.95.169
	  Hostname:    functional-538700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 2181075aadeb49f5b808c5d9972c369f
	  System UUID:                489f072d-4554-d14b-a155-c5d2818039ac
	  Boot ID:                    c009c7f2-b472-42b8-83a8-827770ff5cd6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-s48fs                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m40s
	  kube-system                 etcd-functional-538700                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m52s
	  kube-system                 kube-apiserver-functional-538700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-controller-manager-functional-538700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-proxy-29dsg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 kube-scheduler-functional-538700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m38s                kube-proxy       
	  Normal  Starting                 116s                 kube-proxy       
	  Normal  NodeHasSufficientPID     4m (x7 over 4m)      kubelet          Node functional-538700 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    4m (x8 over 4m)      kubelet          Node functional-538700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m (x8 over 4m)      kubelet          Node functional-538700 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m52s                kubelet          Node functional-538700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s                kubelet          Node functional-538700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s                kubelet          Node functional-538700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m52s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m50s                kubelet          Node functional-538700 status is now: NodeReady
	  Normal  RegisteredNode           3m41s                node-controller  Node functional-538700 event: Registered Node functional-538700 in Controller
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node functional-538700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node functional-538700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x7 over 2m3s)  kubelet          Node functional-538700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           106s                 node-controller  Node functional-538700 event: Registered Node functional-538700 in Controller
	
	
	==> dmesg <==
	[  +0.088743] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.079944] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.584805] systemd-fstab-generator[1529]: Ignoring "noauto" option for root device
	[  +6.025377] systemd-fstab-generator[1723]: Ignoring "noauto" option for root device
	[  +0.086194] kauditd_printk_skb: 51 callbacks suppressed
	[Apr16 16:38] systemd-fstab-generator[2132]: Ignoring "noauto" option for root device
	[  +0.136425] kauditd_printk_skb: 62 callbacks suppressed
	[ +12.336911] systemd-fstab-generator[2343]: Ignoring "noauto" option for root device
	[  +0.198736] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.063196] kauditd_printk_skb: 71 callbacks suppressed
	[Apr16 16:39] systemd-fstab-generator[3392]: Ignoring "noauto" option for root device
	[  +0.579257] systemd-fstab-generator[3427]: Ignoring "noauto" option for root device
	[  +0.229679] systemd-fstab-generator[3439]: Ignoring "noauto" option for root device
	[  +0.271959] systemd-fstab-generator[3453]: Ignoring "noauto" option for root device
	[  +5.260119] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.831644] systemd-fstab-generator[4045]: Ignoring "noauto" option for root device
	[  +0.185197] systemd-fstab-generator[4057]: Ignoring "noauto" option for root device
	[  +0.180430] systemd-fstab-generator[4069]: Ignoring "noauto" option for root device
	[  +0.238733] systemd-fstab-generator[4091]: Ignoring "noauto" option for root device
	[  +0.758061] systemd-fstab-generator[4308]: Ignoring "noauto" option for root device
	[  +3.300552] systemd-fstab-generator[4424]: Ignoring "noauto" option for root device
	[  +0.103248] kauditd_printk_skb: 140 callbacks suppressed
	[  +5.488151] kauditd_printk_skb: 52 callbacks suppressed
	[Apr16 16:40] kauditd_printk_skb: 31 callbacks suppressed
	[  +3.532609] systemd-fstab-generator[5313]: Ignoring "noauto" option for root device
	
	
	==> etcd [97f842982d1c] <==
	{"level":"info","ts":"2024-04-16T16:39:56.328314Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T16:39:56.328327Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T16:39:56.331162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18b9879e50d0b855 switched to configuration voters=(1781604241632180309)"}
	{"level":"info","ts":"2024-04-16T16:39:56.331251Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6924fb975f1f00c5","local-member-id":"18b9879e50d0b855","added-peer-id":"18b9879e50d0b855","added-peer-peer-urls":["https://172.19.95.169:2380"]}
	{"level":"info","ts":"2024-04-16T16:39:56.331396Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6924fb975f1f00c5","local-member-id":"18b9879e50d0b855","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:39:56.331451Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:39:56.332879Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T16:39:56.338063Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"18b9879e50d0b855","initial-advertise-peer-urls":["https://172.19.95.169:2380"],"listen-peer-urls":["https://172.19.95.169:2380"],"advertise-client-urls":["https://172.19.95.169:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.95.169:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T16:39:56.338266Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T16:39:56.338514Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.95.169:2380"}
	{"level":"info","ts":"2024-04-16T16:39:56.340943Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.95.169:2380"}
	{"level":"info","ts":"2024-04-16T16:39:57.445087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18b9879e50d0b855 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-16T16:39:57.445129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18b9879e50d0b855 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-16T16:39:57.445239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18b9879e50d0b855 received MsgPreVoteResp from 18b9879e50d0b855 at term 2"}
	{"level":"info","ts":"2024-04-16T16:39:57.445344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18b9879e50d0b855 became candidate at term 3"}
	{"level":"info","ts":"2024-04-16T16:39:57.445424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18b9879e50d0b855 received MsgVoteResp from 18b9879e50d0b855 at term 3"}
	{"level":"info","ts":"2024-04-16T16:39:57.445503Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18b9879e50d0b855 became leader at term 3"}
	{"level":"info","ts":"2024-04-16T16:39:57.44555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 18b9879e50d0b855 elected leader 18b9879e50d0b855 at term 3"}
	{"level":"info","ts":"2024-04-16T16:39:57.44967Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"18b9879e50d0b855","local-member-attributes":"{Name:functional-538700 ClientURLs:[https://172.19.95.169:2379]}","request-path":"/0/members/18b9879e50d0b855/attributes","cluster-id":"6924fb975f1f00c5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T16:39:57.449987Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T16:39:57.452113Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T16:39:57.452687Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T16:39:57.45283Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.95.169:2379"}
	{"level":"info","ts":"2024-04-16T16:39:57.453119Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T16:39:57.455391Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [b1d710fcddde] <==
	{"level":"info","ts":"2024-04-16T16:37:59.186147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18b9879e50d0b855 became candidate at term 2"}
	{"level":"info","ts":"2024-04-16T16:37:59.18787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18b9879e50d0b855 received MsgVoteResp from 18b9879e50d0b855 at term 2"}
	{"level":"info","ts":"2024-04-16T16:37:59.188001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18b9879e50d0b855 became leader at term 2"}
	{"level":"info","ts":"2024-04-16T16:37:59.188102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 18b9879e50d0b855 elected leader 18b9879e50d0b855 at term 2"}
	{"level":"info","ts":"2024-04-16T16:37:59.192461Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"18b9879e50d0b855","local-member-attributes":"{Name:functional-538700 ClientURLs:[https://172.19.95.169:2379]}","request-path":"/0/members/18b9879e50d0b855/attributes","cluster-id":"6924fb975f1f00c5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T16:37:59.192654Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T16:37:59.193319Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T16:37:59.193413Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T16:37:59.1927Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:37:59.192762Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T16:37:59.198614Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6924fb975f1f00c5","local-member-id":"18b9879e50d0b855","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:37:59.198832Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:37:59.198996Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:37:59.202953Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.95.169:2379"}
	{"level":"info","ts":"2024-04-16T16:37:59.20778Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T16:39:36.859937Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-16T16:39:36.859983Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-538700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.19.95.169:2380"],"advertise-client-urls":["https://172.19.95.169:2379"]}
	{"level":"warn","ts":"2024-04-16T16:39:36.860051Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T16:39:36.860201Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T16:39:36.908792Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 172.19.95.169:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T16:39:36.908842Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 172.19.95.169:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-16T16:39:36.908879Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"18b9879e50d0b855","current-leader-member-id":"18b9879e50d0b855"}
	{"level":"info","ts":"2024-04-16T16:39:36.913228Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"172.19.95.169:2380"}
	{"level":"info","ts":"2024-04-16T16:39:36.91332Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"172.19.95.169:2380"}
	{"level":"info","ts":"2024-04-16T16:39:36.91333Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-538700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.19.95.169:2380"],"advertise-client-urls":["https://172.19.95.169:2379"]}
	
	
	==> kernel <==
	 16:41:57 up 5 min,  0 users,  load average: 0.54, 0.29, 0.12
	Linux functional-538700 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a7b557b7631e] <==
	W0416 16:39:46.160371       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.163691       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.198972       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.205700       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.248213       1 logging.go:59] [core] [Channel #1 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.252299       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.296241       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.304322       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.321289       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.357083       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.387435       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.390476       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.416828       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.451196       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.503185       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.522090       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.586892       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.598537       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.629490       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.676471       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.731033       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.774358       1 logging.go:59] [core] [Channel #184 SubChannel #185] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.839888       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.852855       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 16:39:46.863349       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d0e5c804e8c6] <==
	I0416 16:39:58.782086       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0416 16:39:58.782770       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0416 16:39:58.782901       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0416 16:39:58.842112       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0416 16:39:58.843864       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 16:39:58.847107       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 16:39:58.847378       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 16:39:58.849555       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 16:39:58.856157       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E0416 16:39:58.870046       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0416 16:39:58.883149       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 16:39:58.883184       1 aggregator.go:165] initial CRD sync complete...
	I0416 16:39:58.883190       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 16:39:58.883196       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 16:39:58.883228       1 cache.go:39] Caches are synced for autoregister controller
	I0416 16:39:58.883514       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 16:39:58.929020       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0416 16:39:59.753448       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 16:40:00.525644       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 16:40:00.537211       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 16:40:00.596190       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 16:40:00.668870       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 16:40:00.690186       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 16:40:11.383676       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 16:40:11.417493       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [6751872e7712] <==
	I0416 16:38:16.408060       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 16:38:16.443332       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0416 16:38:16.495800       1 shared_informer.go:318] Caches are synced for endpoint
	I0416 16:38:16.604550       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 2"
	I0416 16:38:16.832622       1 shared_informer.go:318] Caches are synced for garbage collector
	I0416 16:38:16.845681       1 shared_informer.go:318] Caches are synced for garbage collector
	I0416 16:38:16.845705       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0416 16:38:17.265951       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-29dsg"
	I0416 16:38:17.476287       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-lqtcg"
	I0416 16:38:17.533973       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-s48fs"
	I0416 16:38:17.564022       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="960.471416ms"
	I0416 16:38:17.582713       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="17.039077ms"
	I0416 16:38:17.584033       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="51.403µs"
	I0416 16:38:17.585357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="513.027µs"
	I0416 16:38:17.804143       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-76f75df574 to 1 from 2"
	I0416 16:38:17.814782       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-76f75df574-lqtcg"
	I0416 16:38:17.829207       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="24.498062ms"
	I0416 16:38:17.838821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="9.11587ms"
	I0416 16:38:17.839170       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="39.802µs"
	I0416 16:38:19.339787       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="45.203µs"
	I0416 16:38:19.354098       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="63.603µs"
	I0416 16:38:19.360432       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="122.806µs"
	I0416 16:38:20.350157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="50.502µs"
	I0416 16:38:20.395000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="16.403201ms"
	I0416 16:38:20.395545       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="66.203µs"
	
	
	==> kube-controller-manager [ea2a2f10fefd] <==
	I0416 16:40:11.346697       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0416 16:40:11.356727       1 shared_informer.go:318] Caches are synced for HPA
	I0416 16:40:11.358028       1 shared_informer.go:318] Caches are synced for job
	I0416 16:40:11.371101       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0416 16:40:11.376805       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 16:40:11.383057       1 shared_informer.go:318] Caches are synced for deployment
	I0416 16:40:11.389891       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0416 16:40:11.391072       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="72.005µs"
	I0416 16:40:11.391249       1 shared_informer.go:318] Caches are synced for endpoint
	I0416 16:40:11.394009       1 shared_informer.go:318] Caches are synced for persistent volume
	I0416 16:40:11.397681       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0416 16:40:11.405213       1 shared_informer.go:318] Caches are synced for daemon sets
	I0416 16:40:11.406459       1 shared_informer.go:318] Caches are synced for stateful set
	I0416 16:40:11.409057       1 shared_informer.go:318] Caches are synced for PVC protection
	I0416 16:40:11.409857       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0416 16:40:11.419180       1 shared_informer.go:318] Caches are synced for taint
	I0416 16:40:11.419856       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0416 16:40:11.420158       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-538700"
	I0416 16:40:11.420478       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0416 16:40:11.420656       1 event.go:376] "Event occurred" object="functional-538700" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-538700 event: Registered Node functional-538700 in Controller"
	I0416 16:40:11.433881       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 16:40:11.444390       1 shared_informer.go:318] Caches are synced for disruption
	I0416 16:40:11.815479       1 shared_informer.go:318] Caches are synced for garbage collector
	I0416 16:40:11.850513       1 shared_informer.go:318] Caches are synced for garbage collector
	I0416 16:40:11.850544       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-proxy [2375d686dc68] <==
	I0416 16:38:18.927671       1 server_others.go:72] "Using iptables proxy"
	I0416 16:38:18.940757       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.95.169"]
	I0416 16:38:18.991917       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:38:18.992041       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:38:18.992059       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:38:18.996463       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:38:18.996948       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:38:18.997042       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:38:18.998569       1 config.go:188] "Starting service config controller"
	I0416 16:38:18.998606       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:38:18.998777       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:38:18.998935       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:38:18.999521       1 config.go:315] "Starting node config controller"
	I0416 16:38:18.999548       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 16:38:19.098922       1 shared_informer.go:318] Caches are synced for service config
	I0416 16:38:19.100302       1 shared_informer.go:318] Caches are synced for node config
	I0416 16:38:19.100513       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d1312f168204] <==
	I0416 16:40:00.829948       1 server_others.go:72] "Using iptables proxy"
	I0416 16:40:00.855896       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.95.169"]
	I0416 16:40:00.952666       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:40:00.952687       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:40:00.952783       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:40:00.956134       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:40:00.956560       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:40:00.956829       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:40:00.961791       1 config.go:188] "Starting service config controller"
	I0416 16:40:00.962590       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:40:00.963301       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:40:00.964146       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:40:00.965359       1 config.go:315] "Starting node config controller"
	I0416 16:40:00.966281       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 16:40:01.064311       1 shared_informer.go:318] Caches are synced for service config
	I0416 16:40:01.066664       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 16:40:01.069257       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [cd15ff71bca2] <==
	W0416 16:38:01.305779       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 16:38:01.305829       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 16:38:02.160387       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 16:38:02.160417       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 16:38:02.191923       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 16:38:02.192037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 16:38:02.202297       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 16:38:02.202507       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 16:38:02.267519       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 16:38:02.268836       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 16:38:02.326605       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 16:38:02.327007       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 16:38:02.335212       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 16:38:02.335399       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 16:38:02.337605       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 16:38:02.337870       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 16:38:02.510591       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 16:38:02.510955       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 16:38:02.535154       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 16:38:02.535331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0416 16:38:04.580872       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 16:39:36.939744       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0416 16:39:36.939786       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0416 16:39:36.959057       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0416 16:39:36.959237       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e90849d26bf6] <==
	I0416 16:39:57.028742       1 serving.go:380] Generated self-signed cert in-memory
	I0416 16:39:58.877700       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0416 16:39:58.877815       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:39:58.882824       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0416 16:39:58.882855       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0416 16:39:58.883449       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 16:39:58.883476       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 16:39:58.883668       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0416 16:39:58.883688       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0416 16:39:58.885140       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0416 16:39:58.885191       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0416 16:39:58.983689       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 16:39:58.983732       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0416 16:39:58.983707       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Apr 16 16:39:58 functional-538700 kubelet[4431]: I0416 16:39:58.908720    4431 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 16 16:39:59 functional-538700 kubelet[4431]: E0416 16:39:59.124552    4431 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-functional-538700\" already exists" pod="kube-system/kube-controller-manager-functional-538700"
	Apr 16 16:39:59 functional-538700 kubelet[4431]: I0416 16:39:59.603938    4431 apiserver.go:52] "Watching apiserver"
	Apr 16 16:39:59 functional-538700 kubelet[4431]: I0416 16:39:59.607844    4431 topology_manager.go:215] "Topology Admit Handler" podUID="93b5000d-9b1b-4346-9f8d-73e52b42af0e" podNamespace="kube-system" podName="kube-proxy-29dsg"
	Apr 16 16:39:59 functional-538700 kubelet[4431]: I0416 16:39:59.607965    4431 topology_manager.go:215] "Topology Admit Handler" podUID="0594a7cf-8c07-464b-916b-37290f0328b7" podNamespace="kube-system" podName="coredns-76f75df574-s48fs"
	Apr 16 16:39:59 functional-538700 kubelet[4431]: I0416 16:39:59.608010    4431 topology_manager.go:215] "Topology Admit Handler" podUID="2526ffa5-f4ff-4859-9389-2b1bde0ea350" podNamespace="kube-system" podName="storage-provisioner"
	Apr 16 16:39:59 functional-538700 kubelet[4431]: I0416 16:39:59.626632    4431 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 16 16:39:59 functional-538700 kubelet[4431]: I0416 16:39:59.657772    4431 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93b5000d-9b1b-4346-9f8d-73e52b42af0e-xtables-lock\") pod \"kube-proxy-29dsg\" (UID: \"93b5000d-9b1b-4346-9f8d-73e52b42af0e\") " pod="kube-system/kube-proxy-29dsg"
	Apr 16 16:39:59 functional-538700 kubelet[4431]: I0416 16:39:59.658211    4431 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93b5000d-9b1b-4346-9f8d-73e52b42af0e-lib-modules\") pod \"kube-proxy-29dsg\" (UID: \"93b5000d-9b1b-4346-9f8d-73e52b42af0e\") " pod="kube-system/kube-proxy-29dsg"
	Apr 16 16:39:59 functional-538700 kubelet[4431]: I0416 16:39:59.658272    4431 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2526ffa5-f4ff-4859-9389-2b1bde0ea350-tmp\") pod \"storage-provisioner\" (UID: \"2526ffa5-f4ff-4859-9389-2b1bde0ea350\") " pod="kube-system/storage-provisioner"
	Apr 16 16:40:00 functional-538700 kubelet[4431]: I0416 16:40:00.303757    4431 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3cc9262c21138bea13f7f52b2899b2d374c7ab7b36748b396f865350fb81d74"
	Apr 16 16:40:00 functional-538700 kubelet[4431]: I0416 16:40:00.412733    4431 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fa6974b3dbeeb5f0aa209113559cf17915fbb3b9d91db767025c8c51448a635"
	Apr 16 16:40:00 functional-538700 kubelet[4431]: I0416 16:40:00.739036    4431 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f8fa98064b321d35d94e3b54e8bde87070c80106246754c1defab1cab84341a"
	Apr 16 16:40:02 functional-538700 kubelet[4431]: I0416 16:40:02.812987    4431 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 16 16:40:09 functional-538700 kubelet[4431]: I0416 16:40:09.355214    4431 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 16 16:40:54 functional-538700 kubelet[4431]: E0416 16:40:54.713747    4431 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:40:54 functional-538700 kubelet[4431]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:40:54 functional-538700 kubelet[4431]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:40:54 functional-538700 kubelet[4431]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:40:54 functional-538700 kubelet[4431]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 16:41:54 functional-538700 kubelet[4431]: E0416 16:41:54.709870    4431 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:41:54 functional-538700 kubelet[4431]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:41:54 functional-538700 kubelet[4431]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:41:54 functional-538700 kubelet[4431]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:41:54 functional-538700 kubelet[4431]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [ca99f0d8588e] <==
	I0416 16:40:00.756210       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 16:40:00.785153       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 16:40:00.785249       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 16:40:18.206085       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 16:40:18.206516       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-538700_d2cdf034-f814-4e10-96de-49f660a644ef!
	I0416 16:40:18.206720       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"480beada-9e31-4261-8fbc-88351d2c9dda", APIVersion:"v1", ResourceVersion:"573", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-538700_d2cdf034-f814-4e10-96de-49f660a644ef became leader
	I0416 16:40:18.307199       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-538700_d2cdf034-f814-4e10-96de-49f660a644ef!
	
	
	==> storage-provisioner [edb07b9be793] <==
	I0416 16:38:24.630972       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 16:38:24.641918       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 16:38:24.642046       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 16:38:24.651151       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 16:38:24.651376       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-538700_b13fda05-b697-42ba-bc16-20786e3f3086!
	I0416 16:38:24.652468       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"480beada-9e31-4261-8fbc-88351d2c9dda", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-538700_b13fda05-b697-42ba-bc16-20786e3f3086 became leader
	I0416 16:38:24.751858       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-538700_b13fda05-b697-42ba-bc16-20786e3f3086!

-- /stdout --
** stderr ** 
	W0416 16:41:50.078189    1664 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-538700 -n functional-538700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-538700 -n functional-538700: (10.8724916s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-538700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (30.16s)

TestFunctional/parallel/ConfigCmd (1.69s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-538700 config unset cpus" to be -""- but got *"W0416 16:45:09.766404    5340 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-538700 config get cpus: exit status 14 (249.0133ms)

** stderr **
	W0416 16:45:10.078584    4588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-538700 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0416 16:45:10.078584    4588 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-538700 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0416 16:45:10.320889   10596 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-538700 config get cpus" to be -""- but got *"W0416 16:45:10.669798    6412 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-538700 config unset cpus" to be -""- but got *"W0416 16:45:10.938675    8132 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-538700 config get cpus: exit status 14 (245.1301ms)

** stderr **
	W0416 16:45:11.210095   11636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-538700 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0416 16:45:11.210095   11636 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.69s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-538700 service --namespace=default --https --url hello-node: exit status 1 (15.0201077s)

** stderr **
	W0416 16:47:21.996772    9684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-538700 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

TestFunctional/parallel/ServiceCmd/Format (15.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-538700 service hello-node --url --format={{.IP}}: exit status 1 (15.0409316s)

** stderr **
	W0416 16:47:37.043511   10024 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-538700 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.04s)

TestFunctional/parallel/ServiceCmd/URL (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-538700 service hello-node --url: exit status 1 (15.0159573s)

** stderr **
	W0416 16:47:52.088010    2272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-538700 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.02s)

TestMultiControlPlane/serial/StartCluster (415.16s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-022600 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0416 16:56:06.797597    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 16:56:06.812005    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 16:56:06.826945    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 16:56:06.859085    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 16:56:06.905207    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 16:56:07.000090    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 16:56:07.171157    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 16:56:07.504338    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 16:56:08.153583    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 16:56:09.445440    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 16:56:12.014250    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 16:56:17.143591    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 16:56:27.396080    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 16:56:47.892762    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 16:57:28.864325    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 16:58:50.793569    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p ha-022600 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: exit status 90 (6m25.2042254s)

-- stdout --
	* [ha-022600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "ha-022600" primary control-plane node in "ha-022600" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting "ha-022600-m02" control-plane node in "ha-022600" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=172.19.81.207
	  - NO_PROXY=172.19.81.207

-- /stdout --
** stderr **
	W0416 16:53:50.072944   12816 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 16:53:50.116950   12816 out.go:291] Setting OutFile to fd 784 ...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.117952   12816 out.go:304] Setting ErrFile to fd 696...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.138920   12816 out.go:298] Setting JSON to false
	I0416 16:53:50.141501   12816 start.go:129] hostinfo: {"hostname":"minikube5","uptime":24059,"bootTime":1713262370,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 16:53:50.141501   12816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 16:53:50.143700   12816 out.go:177] * [ha-022600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 16:53:50.144387   12816 notify.go:220] Checking for updates...
	I0416 16:53:50.144982   12816 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:53:50.145881   12816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:53:50.146373   12816 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 16:53:50.146987   12816 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:53:50.147788   12816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:53:50.149250   12816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:53:54.959514   12816 out.go:177] * Using the hyperv driver based on user configuration
	I0416 16:53:54.959811   12816 start.go:297] selected driver: hyperv
	I0416 16:53:54.959811   12816 start.go:901] validating driver "hyperv" against <nil>
	I0416 16:53:54.959811   12816 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:53:55.002641   12816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:53:55.003374   12816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:53:55.003816   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:53:55.003816   12816 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0416 16:53:55.003816   12816 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0416 16:53:55.003816   12816 start.go:340] cluster config:
	{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:53:55.003816   12816 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:53:55.005700   12816 out.go:177] * Starting "ha-022600" primary control-plane node in "ha-022600" cluster
	I0416 16:53:55.006053   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:53:55.006397   12816 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 16:53:55.006397   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:53:55.006539   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:53:55.006809   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:53:55.007075   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:53:55.007821   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json: {Name:mkc2f9747189bfa0db5ea21e93e1afafc0e89eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:53:55.008149   12816 start.go:360] acquireMachinesLock for ha-022600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:53:55.009151   12816 start.go:364] duration metric: took 1.0024ms to acquireMachinesLock for "ha-022600"
	I0416 16:53:55.009151   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:53:55.009151   12816 start.go:125] createHost starting for "" (driver="hyperv")
	I0416 16:53:55.010175   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:53:55.010397   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:53:55.010397   12816 client.go:168] LocalClient.Create starting
	I0416 16:53:55.010740   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011200   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011541   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:56.853713   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:58.347399   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:59.667644   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:02.791736   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:54:03.131710   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: Creating VM...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:05.824937   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:54:05.825022   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:54:07.398351   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:54:07.398635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:07.398635   12816 main.go:141] libmachine: Creating VHD
	I0416 16:54:07.398733   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:54:10.982944   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E9EB5342-E929-43B6-8B97-D7BDD354CEE1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:54:10.983213   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:54:10.992883   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -SizeBytes 20000MB
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-022600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600 -DynamicMemoryEnabled $false
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:21.397696   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600 -Count 2
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:23.302296   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\boot2docker.iso'
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:25.541060   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd'
	I0416 16:54:27.919093   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: Starting VM...
	I0416 16:54:27.919462   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600
	I0416 16:54:30.480037   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:32.483346   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:34.785082   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:34.785271   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:35.799483   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:37.788898   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:40.058231   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:40.058742   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:41.064074   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:45.301253   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:45.301420   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:46.309647   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:50.614494   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:50.615195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:51.620909   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:53.639317   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:53.640351   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:53.640405   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:55.942630   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:54:55.943393   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:55.943471   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:57.837395   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:54:57.837474   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:59.762683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:59.763360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:59.763440   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:02.010689   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:02.023158   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:02.023158   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:55:02.152140   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:55:02.152244   12816 buildroot.go:166] provisioning hostname "ha-022600"
	I0416 16:55:02.152322   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:03.957618   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:06.309822   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:06.310484   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:06.310484   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600 && echo "ha-022600" | sudo tee /etc/hostname
	I0416 16:55:06.479074   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600
	
	I0416 16:55:06.479182   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:08.433073   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:10.796713   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:10.797321   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:10.797321   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:55:10.944702   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:55:10.944870   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:55:10.944983   12816 buildroot.go:174] setting up certificates
	I0416 16:55:10.944983   12816 provision.go:84] configureAuth start
	I0416 16:55:10.945092   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:12.933614   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:17.088334   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:19.325791   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:19.326294   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:19.326294   12816 provision.go:143] copyHostCerts
	I0416 16:55:19.326294   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:55:19.326294   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:55:19.326294   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:55:19.326900   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:55:19.328097   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:55:19.328097   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:55:19.329417   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:55:19.329417   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:55:19.329417   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:55:19.330063   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:55:19.330726   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600 san=[127.0.0.1 172.19.81.207 ha-022600 localhost minikube]
	I0416 16:55:19.539117   12816 provision.go:177] copyRemoteCerts
	I0416 16:55:19.547114   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:55:19.547114   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:23.727019   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:23.834423   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.287066s)
	I0416 16:55:23.834577   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:55:23.835008   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:55:23.874966   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:55:23.875470   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0416 16:55:23.923921   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:55:23.923921   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:55:23.965042   12816 provision.go:87] duration metric: took 13.0192422s to configureAuth
	I0416 16:55:23.965042   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:55:23.965741   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:55:23.965827   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:25.905339   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:25.905903   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:25.905986   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:28.170079   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:28.170419   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:28.173356   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:28.173937   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:28.173937   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:55:28.301727   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:55:28.301727   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:55:28.302425   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:55:28.302506   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:30.181889   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:32.398667   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:32.399299   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:32.399475   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:55:32.556658   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:55:32.556887   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:34.446928   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:34.446969   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:34.447053   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:36.709442   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:36.710242   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:36.714111   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:36.714437   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:36.714437   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:55:38.655929   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 16:55:38.655929   12816 machine.go:97] duration metric: took 40.8162201s to provisionDockerMachine
	I0416 16:55:38.656036   12816 client.go:171] duration metric: took 1m43.6397622s to LocalClient.Create
	I0416 16:55:38.656036   12816 start.go:167] duration metric: took 1m43.6397622s to libmachine.API.Create "ha-022600"
	I0416 16:55:38.656036   12816 start.go:293] postStartSetup for "ha-022600" (driver="hyperv")
	I0416 16:55:38.656036   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:55:38.665072   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:55:38.665072   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:40.515910   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:42.764754   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:42.765404   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:42.765404   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:42.879399   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2140881s)
	I0416 16:55:42.892410   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:55:42.899117   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:55:42.899117   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:55:42.899734   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:55:42.901086   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:55:42.901154   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:55:42.911237   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:55:42.927664   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:55:42.975440   12816 start.go:296] duration metric: took 4.3191592s for postStartSetup
	I0416 16:55:42.977201   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:44.831562   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:47.134349   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:47.134788   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:47.135000   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:55:47.137270   12816 start.go:128] duration metric: took 1m52.1217609s to createHost
	I0416 16:55:47.137270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:49.024657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:51.238446   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:51.238526   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:51.242455   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:51.243052   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:51.243052   12816 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 16:55:51.369469   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286551.540248133
	
	I0416 16:55:51.369575   12816 fix.go:216] guest clock: 1713286551.540248133
	I0416 16:55:51.369575   12816 fix.go:229] Guest: 2024-04-16 16:55:51.540248133 +0000 UTC Remote: 2024-04-16 16:55:47.1372703 +0000 UTC m=+117.146546101 (delta=4.402977833s)
	I0416 16:55:51.369790   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:53.407581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:55.667543   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:55.667688   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:55.667688   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286551
	I0416 16:55:55.810591   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:55:51 UTC 2024
	
	I0416 16:55:55.810700   12816 fix.go:236] clock set: Tue Apr 16 16:55:51 UTC 2024
	 (err=<nil>)
	I0416 16:55:55.810700   12816 start.go:83] releasing machines lock for "ha-022600", held for 2m0.7946995s
	I0416 16:55:55.810965   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:57.711672   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:59.985139   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:59.985210   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:59.988730   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:55:59.988803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:59.998550   12816 ssh_runner.go:195] Run: cat /version.json
	I0416 16:55:59.998550   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:01.995788   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.995959   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.996084   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:04.379274   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.379356   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.379701   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.391360   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.392161   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.392520   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.469159   12816 ssh_runner.go:235] Completed: cat /version.json: (4.4703555s)
	I0416 16:56:04.479363   12816 ssh_runner.go:195] Run: systemctl --version
	I0416 16:56:04.584079   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5950892s)
	I0416 16:56:04.593130   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:56:04.602217   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:56:04.610705   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:56:04.639084   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:56:04.639119   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:04.639119   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:04.684127   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:56:04.713899   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:56:04.734297   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:56:04.745020   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:56:04.776657   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.806087   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:56:04.854166   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.890388   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:56:04.918140   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:56:04.946595   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:56:04.975408   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:56:05.001633   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:56:05.028505   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:56:05.053299   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:05.230466   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:56:05.260161   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:05.269988   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:56:05.302694   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.335619   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:56:05.368663   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.402792   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.435612   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:56:05.483431   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.505797   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:05.548843   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:56:05.563980   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:56:05.582552   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:56:05.624048   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:56:05.804495   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:56:05.984936   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:56:05.985183   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:56:06.032244   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:06.217075   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:08.662995   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4457805s)
	I0416 16:56:08.670977   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 16:56:08.701542   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:08.730698   12816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 16:56:08.941813   12816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 16:56:09.145939   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.331138   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 16:56:09.370232   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:09.409657   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.615575   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 16:56:09.726879   12816 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 16:56:09.737760   12816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 16:56:09.746450   12816 start.go:562] Will wait 60s for crictl version
	I0416 16:56:09.755840   12816 ssh_runner.go:195] Run: which crictl
	I0416 16:56:09.771470   12816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:56:09.827603   12816 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 16:56:09.836477   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.874967   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.907967   12816 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 16:56:09.908249   12816 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: 172.19.80.1/20
	I0416 16:56:09.924842   12816 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 16:56:09.931830   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:56:09.968931   12816 kubeadm.go:877] updating cluster {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:56:09.968931   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:56:09.975955   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:09.996899   12816 docker.go:685] Got preloaded images: 
	I0416 16:56:09.996899   12816 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0416 16:56:10.008276   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:10.035609   12816 ssh_runner.go:195] Run: which lz4
	I0416 16:56:10.042582   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0416 16:56:10.050849   12816 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 16:56:10.058074   12816 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 16:56:10.058074   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0416 16:56:11.721910   12816 docker.go:649] duration metric: took 1.6789563s to copy over tarball
	I0416 16:56:11.731181   12816 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 16:56:20.333529   12816 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.60186s)
	I0416 16:56:20.333529   12816 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 16:56:20.400516   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:20.419486   12816 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0416 16:56:20.469018   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:20.655543   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:23.229259   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5734984s)
	I0416 16:56:23.240705   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:23.262332   12816 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0416 16:56:23.262383   12816 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:56:23.262383   12816 kubeadm.go:928] updating node { 172.19.81.207 8443 v1.29.3 docker true true} ...
	I0416 16:56:23.262383   12816 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-022600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.81.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:56:23.270008   12816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 16:56:23.307277   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:23.307277   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:23.307362   12816 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:56:23.307406   12816 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.81.207 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-022600 NodeName:ha-022600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.81.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.81.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:56:23.307691   12816 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.81.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-022600"
	  kubeletExtraArgs:
	    node-ip: 172.19.81.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.81.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 16:56:23.307749   12816 kube-vip.go:111] generating kube-vip config ...
	I0416 16:56:23.318492   12816 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:56:23.343950   12816 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:56:23.344258   12816 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.95.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0416 16:56:23.353585   12816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:56:23.370542   12816 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:56:23.379813   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0416 16:56:23.397865   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0416 16:56:23.432291   12816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:56:23.462868   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0416 16:56:23.492579   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0416 16:56:23.534977   12816 ssh_runner.go:195] Run: grep 172.19.95.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:56:23.542734   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:56:23.575719   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:23.754395   12816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:56:23.781462   12816 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600 for IP: 172.19.81.207
	I0416 16:56:23.781462   12816 certs.go:194] generating shared ca certs ...
	I0416 16:56:23.781462   12816 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 16:56:23.783651   12816 certs.go:256] generating profile certs ...
	I0416 16:56:23.784402   12816 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key
	I0416 16:56:23.784569   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt with IP's: []
	I0416 16:56:23.984047   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt ...
	I0416 16:56:23.984047   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt: {Name:mk3ebdcb7f076a09a399313f7ed3edf14403a6a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.985977   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key ...
	I0416 16:56:23.985977   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key: {Name:mk94343a485b04f4b25a0ccd3245e197e7ecbec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.986215   12816 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648
	I0416 16:56:23.987265   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.81.207 172.19.95.254]
	I0416 16:56:24.317716   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 ...
	I0416 16:56:24.317716   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648: {Name:mk30f7000427979a1bcf8d6fc3995d1f7ccc170c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.319797   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 ...
	I0416 16:56:24.319797   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648: {Name:mk95e9e3e0f96031ef005f6c36470c216303a111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.320163   12816 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt
	I0416 16:56:24.331288   12816 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key
	I0416 16:56:24.332214   12816 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key
	I0416 16:56:24.332214   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt with IP's: []
	I0416 16:56:24.406574   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt ...
	I0416 16:56:24.406574   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt: {Name:mk73158a02cd8861e90a3b76d50704b360c358ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.407013   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key ...
	I0416 16:56:24.407013   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key: {Name:mk6842e2af8fadaf278ec7592edd5bd96f07c8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.408078   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:56:24.408945   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:56:24.409732   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:56:24.417870   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:56:24.418145   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 16:56:24.418533   12816 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 16:56:24.418533   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 16:56:24.418811   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 16:56:24.418990   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 16:56:24.419161   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 16:56:24.419368   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 16:56:24.419647   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 16:56:24.419767   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:24.419867   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 16:56:24.420003   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:56:24.466985   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:56:24.509816   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:56:24.554817   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:56:24.603006   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 16:56:24.646596   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 16:56:24.694120   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:56:24.741669   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 16:56:24.785888   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 16:56:24.829403   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:56:24.891821   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 16:56:24.933883   12816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 16:56:24.975091   12816 ssh_runner.go:195] Run: openssl version
	I0416 16:56:24.994129   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 16:56:25.021821   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.028512   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.037989   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.054924   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:56:25.080011   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:56:25.106815   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.113980   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.126339   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.144599   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:56:25.170309   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 16:56:25.199080   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.206080   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.214031   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.237026   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
	I0416 16:56:25.266837   12816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:56:25.273408   12816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:56:25.273858   12816 kubeadm.go:391] StartCluster: {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:56:25.281991   12816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 16:56:25.314891   12816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 16:56:25.342248   12816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 16:56:25.368032   12816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 16:56:25.385737   12816 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 16:56:25.385737   12816 kubeadm.go:156] found existing configuration files:
	
	I0416 16:56:25.393851   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 16:56:25.410393   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 16:56:25.421874   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 16:56:25.453762   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 16:56:25.468769   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 16:56:25.477353   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 16:56:25.501898   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.515888   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 16:56:25.524885   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.548518   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 16:56:25.563660   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 16:56:25.572269   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 16:56:25.587981   12816 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 16:56:25.791977   12816 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 16:56:25.791977   12816 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 16:56:25.958638   12816 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 16:56:25.959035   12816 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 16:56:25.959403   12816 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 16:56:26.228464   12816 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 16:56:26.229544   12816 out.go:204]   - Generating certificates and keys ...
	I0416 16:56:26.229862   12816 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 16:56:26.230882   12816 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 16:56:26.359024   12816 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 16:56:26.583044   12816 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 16:56:26.715543   12816 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 16:56:27.014892   12816 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 16:56:27.414264   12816 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 16:56:27.414467   12816 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.642396   12816 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 16:56:27.642770   12816 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.844566   12816 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 16:56:28.089475   12816 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 16:56:28.543900   12816 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 16:56:28.548586   12816 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 16:56:29.051829   12816 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 16:56:29.485679   12816 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 16:56:29.830737   12816 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 16:56:30.055972   12816 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 16:56:30.310446   12816 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 16:56:30.311113   12816 out.go:204]   - Booting up control plane ...
	I0416 16:56:30.311289   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 16:56:30.311970   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 16:56:30.317049   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 16:56:30.342443   12816 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 16:56:30.526725   12816 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 16:56:37.142045   12816 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.615653 seconds
	I0416 16:56:37.159025   12816 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 16:56:37.175108   12816 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 16:56:37.707867   12816 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 16:56:37.708715   12816 kubeadm.go:309] [mark-control-plane] Marking the node ha-022600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 16:56:38.222729   12816 kubeadm.go:309] [bootstrap-token] Using token: a3r5qn.ikva200bfcppykg5
	I0416 16:56:38.223819   12816 out.go:204]   - Configuring RBAC rules ...
	I0416 16:56:38.224231   12816 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 16:56:38.232416   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 16:56:38.244982   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 16:56:38.249926   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 16:56:38.257723   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 16:56:38.262029   12816 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 16:56:38.279883   12816 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 16:56:38.592701   12816 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 16:56:38.638273   12816 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 16:56:38.639572   12816 kubeadm.go:309] 
	I0416 16:56:38.640154   12816 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 16:56:38.640230   12816 kubeadm.go:309] 
	I0416 16:56:38.640982   12816 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 16:56:38.641038   12816 kubeadm.go:309] 
	I0416 16:56:38.641299   12816 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 16:56:38.641581   12816 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309] 
	I0416 16:56:38.641989   12816 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 16:56:38.642031   12816 kubeadm.go:309] 
	I0416 16:56:38.642184   12816 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 16:56:38.642228   12816 kubeadm.go:309] 
	I0416 16:56:38.642350   12816 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 16:56:38.642660   12816 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 16:56:38.642862   12816 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 16:56:38.642900   12816 kubeadm.go:309] 
	I0416 16:56:38.643166   12816 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 16:56:38.643426   12816 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 16:56:38.643426   12816 kubeadm.go:309] 
	I0416 16:56:38.643613   12816 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.643867   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 16:56:38.643909   12816 kubeadm.go:309] 	--control-plane 
	I0416 16:56:38.643961   12816 kubeadm.go:309] 
	I0416 16:56:38.644233   12816 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 16:56:38.644272   12816 kubeadm.go:309] 
	I0416 16:56:38.644444   12816 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.644734   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 16:56:38.647455   12816 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 16:56:38.647488   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:38.647539   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:38.648246   12816 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 16:56:38.657141   12816 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 16:56:38.671263   12816 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 16:56:38.671263   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 16:56:38.722410   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 16:56:39.265655   12816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-022600 minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=ha-022600 minikube.k8s.io/primary=true
	I0416 16:56:39.290244   12816 ops.go:34] apiserver oom_adj: -16
	I0416 16:56:39.441163   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.950155   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.453751   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.955147   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.455931   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.953044   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.454696   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.949299   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.454962   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.953447   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.456402   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.956686   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.449476   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.951602   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.451988   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.949212   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.449356   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.950703   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.458777   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.956811   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.456669   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.943595   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.443906   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.950503   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.454863   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.944285   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:52.083562   12816 kubeadm.go:1107] duration metric: took 12.8170858s to wait for elevateKubeSystemPrivileges
	W0416 16:56:52.083816   12816 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 16:56:52.083816   12816 kubeadm.go:393] duration metric: took 26.808438s to StartCluster
	I0416 16:56:52.083816   12816 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.084214   12816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:52.086643   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.088384   12816 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:56:52.088384   12816 start.go:240] waiting for startup goroutines ...
	I0416 16:56:52.088384   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 16:56:52.088384   12816 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 16:56:52.088630   12816 addons.go:69] Setting storage-provisioner=true in profile "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:234] Setting addon storage-provisioner=true in "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:69] Setting default-storageclass=true in profile "ha-022600"
	I0416 16:56:52.088850   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:52.088964   12816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-022600"
	I0416 16:56:52.088964   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:56:52.090289   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.090671   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.207597   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 16:56:52.469504   12816 start.go:946] {"host.minikube.internal": 172.19.80.1} host record injected into CoreDNS's ConfigMap
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.165583   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.165635   12816 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:54.165635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.166734   12816 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:56:54.166340   12816 kapi.go:59] client config for ha-022600: &rest.Config{Host:"https://172.19.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 16:56:54.167133   12816 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:56:54.167133   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 16:56:54.167133   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:54.167791   12816 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 16:56:54.168180   12816 addons.go:234] Setting addon default-storageclass=true in "ha-022600"
	I0416 16:56:54.168347   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:54.169251   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:56.312581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.312988   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313046   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313270   12816 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 16:56:56.313270   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.330966   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:58.735727   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:58.735876   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.736103   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:58.898469   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:00.676245   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:57:00.828151   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 16:57:01.248041   12816 round_trippers.go:463] GET https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0416 16:57:01.248041   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.248041   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.248041   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.261890   12816 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0416 16:57:01.262478   12816 round_trippers.go:463] PUT https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0416 16:57:01.262478   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Content-Type: application/json
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.262478   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.268964   12816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:57:01.269995   12816 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 16:57:01.270495   12816 addons.go:505] duration metric: took 9.181591s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 16:57:01.270576   12816 start.go:245] waiting for cluster config update ...
	I0416 16:57:01.270618   12816 start.go:254] writing updated cluster config ...
	I0416 16:57:01.271859   12816 out.go:177] 
	I0416 16:57:01.284169   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:57:01.284169   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.285951   12816 out.go:177] * Starting "ha-022600-m02" control-plane node in "ha-022600" cluster
	I0416 16:57:01.286952   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:57:01.286952   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:57:01.286952   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:57:01.286952   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:57:01.286952   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.296247   12816 start.go:360] acquireMachinesLock for ha-022600-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:57:01.297324   12816 start.go:364] duration metric: took 1.0773ms to acquireMachinesLock for "ha-022600-m02"
	I0416 16:57:01.297559   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:57:01.297559   12816 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0416 16:57:01.297559   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:57:01.297559   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:57:01.297559   12816 client.go:168] LocalClient.Create starting
	I0416 16:57:01.298838   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299293   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:57:03.017072   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:57:03.017279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:03.017366   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:09.316740   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:57:09.669552   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: Creating VM...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:12.690107   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:57:12.690185   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:14.267157   12816 main.go:141] libmachine: Creating VHD
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : FE960248-03C1-43D6-B7AE-A60D4C86308B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:57:17.758158   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:57:20.709379   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:20.709950   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:20.710019   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -SizeBytes 20000MB
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-022600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600-m02 -DynamicMemoryEnabled $false
	I0416 16:57:28.159153   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:28.159229   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:28.159409   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600-m02 -Count 2
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\boot2docker.iso'
	I0416 16:57:32.420739   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:32.421735   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:32.421878   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd'
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: Starting VM...
	I0416 16:57:34.780971   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600-m02
	I0416 16:57:37.369505   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:57:37.369767   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:39.415286   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:42.700464   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:48.000886   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:49.992438   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:49.992894   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:49.992930   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:53.290891   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:55.287716   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:55.287962   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:55.288037   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:58.572803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:02.905391   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stderr =====>] : 
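The "Waiting for host to start..." section above is a simple poll loop: libmachine repeatedly queries the VM state and the first network adapter's IP, sleeping about a second between attempts, until a non-empty IP comes back (here after roughly 25 seconds). A generic sketch of that retry shape, under the assumption that the per-attempt query is abstracted as a function, could look like this (`pollFor` is an illustrative name, not minikube code):

```go
package main

import (
	"fmt"
	"time"
)

// pollFor calls query until it returns a non-empty value or attempts are
// exhausted, mirroring the Get-VM state / ipaddresses loop in the log.
func pollFor(query func() string, attempts int, delay time.Duration) (string, bool) {
	for i := 0; i < attempts; i++ {
		if v := query(); v != "" {
			return v, true
		}
		time.Sleep(delay)
	}
	return "", false
}

func main() {
	// Simulate a VM whose IP only appears on the third query.
	replies := []string{"", "", "172.19.80.125"}
	i := 0
	query := func() string { v := replies[i%len(replies)]; i++; return v }
	ip, ok := pollFor(query, 10, time.Millisecond)
	fmt.Println(ip, ok)
}
```

In the real driver each "attempt" costs two multi-second PowerShell round-trips, which is why this wait alone accounts for a noticeable share of the per-node provisioning time seen in the failing multi-node tests.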
	I0416 16:58:04.899479   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:58:04.899479   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:06.914869   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:09.273511   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:09.273546   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:09.277783   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:09.278406   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:09.278406   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:58:09.413281   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:58:09.413281   12816 buildroot.go:166] provisioning hostname "ha-022600-m02"
	I0416 16:58:09.413281   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:11.439079   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:13.805295   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:13.805684   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:13.805684   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600-m02 && echo "ha-022600-m02" | sudo tee /etc/hostname
	I0416 16:58:13.957933   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600-m02
	
	I0416 16:58:13.958021   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:18.176996   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:18.178002   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:18.182057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:18.182681   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:18.182681   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:58:18.315751   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:58:18.315853   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:58:18.315853   12816 buildroot.go:174] setting up certificates
	I0416 16:58:18.315853   12816 provision.go:84] configureAuth start
	I0416 16:58:18.315853   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:20.243862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:22.525833   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:22.525945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:22.526057   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:24.418894   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:26.735560   12816 provision.go:143] copyHostCerts
	I0416 16:58:26.736546   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:58:26.736627   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:58:26.737290   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:58:26.737900   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:58:26.737900   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:58:26.738191   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:58:26.738908   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:58:26.738977   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:58:26.739652   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600-m02 san=[127.0.0.1 172.19.80.125 ha-022600-m02 localhost minikube]
	I0416 16:58:26.917277   12816 provision.go:177] copyRemoteCerts
	I0416 16:58:26.926308   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:58:26.926600   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:28.830343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:31.113681   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:31.229222   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3026703s)
	I0416 16:58:31.229222   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:58:31.229700   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:58:31.279666   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:58:31.280307   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:58:31.328101   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:58:31.328245   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:58:31.382563   12816 provision.go:87] duration metric: took 13.065969s to configureAuth
	I0416 16:58:31.382563   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:58:31.383343   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:58:31.383343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:33.331275   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:35.653673   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:35.653721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:35.656855   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:35.657430   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:35.657430   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:58:35.781565   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:58:35.781565   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:58:35.781565   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:58:35.782090   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:37.696344   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:39.961057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:39.961515   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:39.961564   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.81.207"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:58:40.123664   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.81.207
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:58:40.123818   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:42.064878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:42.064974   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:42.065152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:44.330103   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:44.330731   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:44.330731   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:58:46.283136   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 16:58:46.283253   12816 machine.go:97] duration metric: took 41.3814279s to provisionDockerMachine
	I0416 16:58:46.283253   12816 client.go:171] duration metric: took 1m44.9797412s to LocalClient.Create
	I0416 16:58:46.283253   12816 start.go:167] duration metric: took 1m44.9797412s to libmachine.API.Create "ha-022600"
	I0416 16:58:46.283253   12816 start.go:293] postStartSetup for "ha-022600-m02" (driver="hyperv")
	I0416 16:58:46.283345   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:58:46.292724   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:58:46.292724   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:50.480821   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:50.575284   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2823171s)
	I0416 16:58:50.584260   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:58:50.591292   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:58:50.591900   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:58:50.591900   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:58:50.601073   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:58:50.618807   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:58:50.671301   12816 start.go:296] duration metric: took 4.3877068s for postStartSetup
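Editor's note: the postStartSetup block above scans the local `.minikube\files` tree for assets and copies `53242.pem` into the guest's `/etc/ssl/certs` over scp. A minimal sketch of that sync step, run against temporary directories instead of a live guest (the file name is taken from the log; the real transfer uses scp, not cp):

```shell
# Stand-ins for the host-side asset dir and the guest's /etc/ssl/certs.
set -eu

src=$(mktemp -d)   # stands in for ...\.minikube\files\etc\ssl\certs
dst=$(mktemp -d)   # stands in for /etc/ssl/certs on the guest

# Local asset discovered by the filesync scan.
printf 'dummy certificate bytes' > "$src/53242.pem"

# Equivalent of: sudo mkdir -p /etc/ssl/certs, then scp --> /etc/ssl/certs/53242.pem
mkdir -p "$dst"
cp "$src/53242.pem" "$dst/53242.pem"

ls "$dst"
```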
	I0416 16:58:50.673161   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:52.621684   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:54.923763   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:58:54.926483   12816 start.go:128] duration metric: took 1m53.622481s to createHost
	I0416 16:58:54.926657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:56.793184   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:59.024255   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:59.025184   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:59.029108   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:59.029633   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:59.029730   12816 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 16:58:59.149333   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286739.315259098
	
	I0416 16:58:59.149333   12816 fix.go:216] guest clock: 1713286739.315259098
	I0416 16:58:59.149333   12816 fix.go:229] Guest: 2024-04-16 16:58:59.315259098 +0000 UTC Remote: 2024-04-16 16:58:54.9265716 +0000 UTC m=+304.925199701 (delta=4.388687498s)
	I0416 16:58:59.149333   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:01.054656   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:03.307071   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:59:03.307459   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:59:03.307531   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286739
	I0416 16:59:03.449024   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:58:59 UTC 2024
	
	I0416 16:59:03.449024   12816 fix.go:236] clock set: Tue Apr 16 16:58:59 UTC 2024
	 (err=<nil>)
	I0416 16:59:03.449024   12816 start.go:83] releasing machines lock for "ha-022600-m02", held for 2m2.1447745s
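Editor's note: the clock-fix exchange above reads the guest clock with `date +%s.%N`, compares it against the host's record of remote time (delta=4.388687498s in this run), and corrects the skew with `sudo date -s @<epoch>`. A sketch of that logic with the epochs simulated from the log; no clock is actually changed:

```shell
set -eu

guest_epoch=1713286739   # from: date +%s.%N -> 1713286739.315259098
host_epoch=1713286734    # roughly "Remote: ... 16:58:54" in the log

# Skew between guest and host views of "now", in whole seconds.
delta=$((guest_epoch - host_epoch))
echo "delta=${delta}s"

# The command minikube then issues over SSH to pin the guest clock:
cmd="sudo date -s @${guest_epoch}"
echo "$cmd"
```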
	I0416 16:59:03.450039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:07.739042   12816 out.go:177] * Found network options:
	I0416 16:59:07.739784   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.740404   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.741027   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.741505   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 16:59:07.742708   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.744988   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:59:07.745153   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:07.752817   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 16:59:07.752817   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:09.758953   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:12.157582   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.158536   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.159044   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.185179   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.257436   12816 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5043642s)
	W0416 16:59:12.257436   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:59:12.266545   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:59:12.367206   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:59:12.367296   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:12.367330   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6219642s)
	I0416 16:59:12.367330   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:12.423201   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:59:12.453988   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:59:12.472992   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:59:12.482991   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:59:12.510864   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.538866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:59:12.565866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.597751   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:59:12.622761   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:59:12.648905   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:59:12.674904   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:59:12.713452   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:59:12.741495   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:59:12.768497   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:12.975524   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
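Editor's note: the sed pipeline above rewrites `/etc/containerd/config.toml` in place (sandbox image pinned to `pause:3.9`, `SystemdCgroup = false` to select the cgroupfs driver) before `systemctl restart containerd`. A sandboxed re-run of two of those substitutions against a scratch copy, no sudo, same expressions as in the log:

```shell
set -eu

cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF

# Same sed expressions as in the log, minus sudo and the real path.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

grep sandbox_image "$cfg"
grep SystemdCgroup "$cfg"
```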
	I0416 16:59:13.011635   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:13.023647   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:59:13.058146   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.091991   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:59:13.139058   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.173081   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.208242   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:59:13.259511   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.282094   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:13.329081   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:59:13.344832   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:59:13.362131   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:59:13.403377   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:59:13.597444   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:59:13.768147   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:59:13.768278   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:59:13.808294   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:13.987216   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:00:15.104612   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1138396s)
	I0416 17:00:15.115049   12816 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 17:00:15.145752   12816 out.go:177] 
	W0416 17:00:15.146473   12816 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 16:58:45 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.076842920Z" level=info msg="Starting up"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.077687177Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.078706068Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.109665355Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138411128Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138448735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138508447Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138523049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138600164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138632670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138848110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138955930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139030244Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139045347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139142365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139433520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142495192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142588309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142778845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142795748Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143044695Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143174419Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143191422Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.152862930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153144583Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153313214Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153337519Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153354522Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153467543Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153957434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154159572Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154195179Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154212082Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154230586Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154258491Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154272393Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154287696Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154303599Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154317302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154330504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154344107Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154373612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154392516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154406618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154421121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154434024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154447526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154460128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154474031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154498536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154514539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154525841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154555046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154568249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154583952Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154604755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154629960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154642062Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154700973Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154916114Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155014532Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155030135Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155203567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155302486Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155325090Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155706861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155796078Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155907599Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155947306Z" level=info msg="containerd successfully booted in 0.047582s"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.119001526Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.129323458Z" level=info msg="Loading containers: start."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.358382320Z" level=info msg="Loading containers: done."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377033580Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377149301Z" level=info msg="Daemon has completed initialization"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.447556885Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 16:58:46 ha-022600-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.449134569Z" level=info msg="API listen on [::]:2376"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.178053148Z" level=info msg="Processing signal 'terminated'"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.179830517Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 16:59:14 ha-022600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.180814055Z" level=info msg="Daemon shutdown complete"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181020363Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181054564Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:59:15 ha-022600-m02 dockerd[1019]: time="2024-04-16T16:59:15.248212596Z" level=info msg="Starting up"
	Apr 16 17:00:15 ha-022600-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
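Editor's note: the decisive line in the journal above is dockerd's `failed to dial "/run/containerd/containerd.sock": context deadline exceeded` on the second start attempt (the standalone containerd service had been stopped with `systemctl stop -f containerd` earlier in the run). A sketch of isolating that root-cause line mechanically from a saved copy of the journal; on a live node you would pipe `journalctl -u docker` instead of a file:

```shell
set -eu

journal=$(mktemp)
# Journal text reproduced verbatim from the log above.
cat > "$journal" <<'EOF'
Apr 16 16:59:15 ha-022600-m02 dockerd[1019]: time="2024-04-16T16:59:15.248212596Z" level=info msg="Starting up"
Apr 16 17:00:15 ha-022600-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
EOF

# Extract the first dial-failure fragment, socket path included.
cause=$(grep -o 'failed to dial "[^"]*"' "$journal" | head -n1)
echo "$cause"
```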
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 16:58:45 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.076842920Z" level=info msg="Starting up"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.077687177Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.078706068Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.109665355Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138411128Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138448735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138508447Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138523049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138600164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138632670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138848110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138955930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139030244Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139045347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139142365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139433520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142495192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142588309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142778845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142795748Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143044695Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143174419Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143191422Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.152862930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153144583Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153313214Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153337519Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153354522Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153467543Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153957434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154159572Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154195179Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154212082Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154230586Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154258491Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154272393Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154287696Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154303599Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154317302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154330504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154344107Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154373612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154392516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154406618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154421121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154434024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154447526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154460128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154474031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154498536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154514539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154525841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154555046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154568249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154583952Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154604755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154629960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154642062Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154700973Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154916114Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155014532Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155030135Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155203567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155302486Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155325090Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155706861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155796078Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155907599Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155947306Z" level=info msg="containerd successfully booted in 0.047582s"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.119001526Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.129323458Z" level=info msg="Loading containers: start."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.358382320Z" level=info msg="Loading containers: done."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377033580Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377149301Z" level=info msg="Daemon has completed initialization"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.447556885Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 16:58:46 ha-022600-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.449134569Z" level=info msg="API listen on [::]:2376"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.178053148Z" level=info msg="Processing signal 'terminated'"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.179830517Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 16:59:14 ha-022600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.180814055Z" level=info msg="Daemon shutdown complete"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181020363Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181054564Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:59:15 ha-022600-m02 dockerd[1019]: time="2024-04-16T16:59:15.248212596Z" level=info msg="Starting up"
	Apr 16 17:00:15 ha-022600-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0416 17:00:15.146611   12816 out.go:239] * 
	* 
	W0416 17:00:15.147806   12816 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 17:00:15.148383   12816 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-windows-amd64.exe start -p ha-022600 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv" : exit status 90
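The root cause in the stderr dump above is dockerd timing out while dialing the containerd socket (`failed to dial "/run/containerd/containerd.sock": context deadline exceeded`). A minimal sketch of scanning a captured log for that signature — a hypothetical post-mortem helper, not part of the test suite; the log line is copied from the dump above:

```shell
# Hypothetical helper: detect the dockerd startup failure seen in the journal
# excerpt above by matching on its error text.
line='dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": context deadline exceeded'
case "$line" in
  *"context deadline exceeded"*) echo "dockerd timed out dialing containerd" ;;
  *) echo "no containerd dial timeout found" ;;
esac
```

In a real triage pass the `case` pattern would be applied to each line of `minikube logs` output rather than to a hard-coded string.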
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600: (10.76873s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-022600 logs -n 25: (7.3537591s)
helpers_test.go:252: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	|    Command     |                           Args                           |      Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| image          | functional-538700 image ls                               | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:46 UTC | 16 Apr 24 16:47 UTC |
	| service        | functional-538700 service list                           | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:46 UTC | 16 Apr 24 16:47 UTC |
	| addons         | functional-538700 addons list                            | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC | 16 Apr 24 16:47 UTC |
	| addons         | functional-538700 addons list                            | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC | 16 Apr 24 16:47 UTC |
	|                | -o json                                                  |                   |                   |                |                     |                     |
	| image          | functional-538700 image save --daemon                    | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC | 16 Apr 24 16:47 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-538700 |                   |                   |                |                     |                     |
	|                | --alsologtostderr                                        |                   |                   |                |                     |                     |
	| service        | functional-538700 service list                           | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC | 16 Apr 24 16:47 UTC |
	|                | -o json                                                  |                   |                   |                |                     |                     |
	| service        | functional-538700 service                                | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC | 16 Apr 24 16:47 UTC |
	|                | hello-node-connect --url                                 |                   |                   |                |                     |                     |
	| service        | functional-538700 service                                | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC |                     |
	|                | --namespace=default --https                              |                   |                   |                |                     |                     |
	|                | --url hello-node                                         |                   |                   |                |                     |                     |
	| start          | -p functional-538700                                     | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC |                     |
	|                | --dry-run --memory                                       |                   |                   |                |                     |                     |
	|                | 250MB --alsologtostderr                                  |                   |                   |                |                     |                     |
	|                | --driver=hyperv                                          |                   |                   |                |                     |                     |
	| service        | functional-538700                                        | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC |                     |
	|                | service hello-node --url                                 |                   |                   |                |                     |                     |
	|                | --format={{.IP}}                                         |                   |                   |                |                     |                     |
	| start          | -p functional-538700                                     | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC |                     |
	|                | --dry-run --memory                                       |                   |                   |                |                     |                     |
	|                | 250MB --alsologtostderr                                  |                   |                   |                |                     |                     |
	|                | --driver=hyperv                                          |                   |                   |                |                     |                     |
	| dashboard      | --url --port 36195                                       | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC |                     |
	|                | -p functional-538700                                     |                   |                   |                |                     |                     |
	|                | --alsologtostderr -v=1                                   |                   |                   |                |                     |                     |
	| service        | functional-538700 service                                | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC |                     |
	|                | hello-node --url                                         |                   |                   |                |                     |                     |
	| update-context | functional-538700                                        | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC | 16 Apr 24 16:47 UTC |
	|                | update-context                                           |                   |                   |                |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |                   |                |                     |                     |
	| update-context | functional-538700                                        | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC | 16 Apr 24 16:48 UTC |
	|                | update-context                                           |                   |                   |                |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |                   |                |                     |                     |
	| update-context | functional-538700                                        | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC | 16 Apr 24 16:48 UTC |
	|                | update-context                                           |                   |                   |                |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |                   |                |                     |                     |
	| image          | functional-538700                                        | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:48 UTC | 16 Apr 24 16:48 UTC |
	|                | image ls --format short                                  |                   |                   |                |                     |                     |
	|                | --alsologtostderr                                        |                   |                   |                |                     |                     |
	| image          | functional-538700                                        | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:48 UTC | 16 Apr 24 16:48 UTC |
	|                | image ls --format json                                   |                   |                   |                |                     |                     |
	|                | --alsologtostderr                                        |                   |                   |                |                     |                     |
	| image          | functional-538700                                        | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:48 UTC | 16 Apr 24 16:48 UTC |
	|                | image ls --format table                                  |                   |                   |                |                     |                     |
	|                | --alsologtostderr                                        |                   |                   |                |                     |                     |
	| image          | functional-538700                                        | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:48 UTC | 16 Apr 24 16:48 UTC |
	|                | image ls --format yaml                                   |                   |                   |                |                     |                     |
	|                | --alsologtostderr                                        |                   |                   |                |                     |                     |
	| ssh            | functional-538700 ssh pgrep                              | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:48 UTC |                     |
	|                | buildkitd                                                |                   |                   |                |                     |                     |
	| image          | functional-538700 image build -t                         | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:48 UTC | 16 Apr 24 16:48 UTC |
	|                | localhost/my-image:functional-538700                     |                   |                   |                |                     |                     |
	|                | testdata\build --alsologtostderr                         |                   |                   |                |                     |                     |
	| image          | functional-538700 image ls                               | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:48 UTC | 16 Apr 24 16:48 UTC |
	| delete         | -p functional-538700                                     | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:52 UTC | 16 Apr 24 16:53 UTC |
	| start          | -p ha-022600 --wait=true                                 | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:53 UTC |                     |
	|                | --memory=2200 --ha                                       |                   |                   |                |                     |                     |
	|                | -v=7 --alsologtostderr                                   |                   |                   |                |                     |                     |
	|                | --driver=hyperv                                          |                   |                   |                |                     |                     |
	|----------------|----------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:53:50
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:53:50.116950   12816 out.go:291] Setting OutFile to fd 784 ...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.117952   12816 out.go:304] Setting ErrFile to fd 696...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.138920   12816 out.go:298] Setting JSON to false
	I0416 16:53:50.141501   12816 start.go:129] hostinfo: {"hostname":"minikube5","uptime":24059,"bootTime":1713262370,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 16:53:50.141501   12816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 16:53:50.143700   12816 out.go:177] * [ha-022600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 16:53:50.144387   12816 notify.go:220] Checking for updates...
	I0416 16:53:50.144982   12816 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:53:50.145881   12816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:53:50.146373   12816 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 16:53:50.146987   12816 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:53:50.147788   12816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:53:50.149250   12816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:53:54.959514   12816 out.go:177] * Using the hyperv driver based on user configuration
	I0416 16:53:54.959811   12816 start.go:297] selected driver: hyperv
	I0416 16:53:54.959811   12816 start.go:901] validating driver "hyperv" against <nil>
	I0416 16:53:54.959811   12816 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:53:55.002641   12816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:53:55.003374   12816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:53:55.003816   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:53:55.003816   12816 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0416 16:53:55.003816   12816 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0416 16:53:55.003816   12816 start.go:340] cluster config:
	{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:53:55.003816   12816 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:53:55.005700   12816 out.go:177] * Starting "ha-022600" primary control-plane node in "ha-022600" cluster
	I0416 16:53:55.006053   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:53:55.006397   12816 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 16:53:55.006397   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:53:55.006539   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:53:55.006809   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:53:55.007075   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:53:55.007821   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json: {Name:mkc2f9747189bfa0db5ea21e93e1afafc0e89eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:53:55.008149   12816 start.go:360] acquireMachinesLock for ha-022600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:53:55.009151   12816 start.go:364] duration metric: took 1.0024ms to acquireMachinesLock for "ha-022600"
	I0416 16:53:55.009151   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:53:55.009151   12816 start.go:125] createHost starting for "" (driver="hyperv")
	I0416 16:53:55.010175   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:53:55.010397   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:53:55.010397   12816 client.go:168] LocalClient.Create starting
	I0416 16:53:55.010740   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011200   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011541   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:56.853713   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:58.347399   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:59.667644   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stderr =====>] : 
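(Editor's note: the `Get-VMSwitch` query above filters for External switches or the well-known "Default Switch" GUID, then sorts so External wins. The selection logic can be sketched in Python; `pick_switch` and the sample JSON are illustrative, not minikube's actual code.)

```python
import json

# In Hyper-V's enumeration: 0 = Private, 1 = Internal, 2 = External.
# The "Default Switch" GUID below is the one the driver queries for in the log.
DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

def pick_switch(get_vmswitch_json: str) -> str:
    """Prefer an External switch; otherwise fall back to the Default Switch."""
    switches = json.loads(get_vmswitch_json)
    externals = [s for s in switches if s["SwitchType"] == 2]
    if externals:
        return externals[0]["Name"]
    for s in switches:
        if s["Id"].lower() == DEFAULT_SWITCH_ID:
            return s["Name"]
    raise RuntimeError("no usable Hyper-V switch found")

# Same shape as the JSON printed in the log above.
sample = '[{"Id": "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444", "Name": "Default Switch", "SwitchType": 1}]'
print(pick_switch(sample))  # Default Switch
```

With only the Internal "Default Switch" present, as in this run, the fallback branch is taken, matching the `Using switch "Default Switch"` line later in the log.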
	I0416 16:54:02.791736   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:54:03.131710   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: Creating VM...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:05.824937   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:54:05.825022   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:54:07.398351   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:54:07.398635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:07.398635   12816 main.go:141] libmachine: Creating VHD
	I0416 16:54:07.398733   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:54:10.982944   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E9EB5342-E929-43B6-8B97-D7BDD354CEE1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:54:10.983213   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing SSH key tar header
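(Editor's note: the "magic tar header" / "SSH key tar header" lines refer to packing the machine's SSH key into a tar archive that is written into the small fixed VHD, for the guest to extract on first boot. A rough sketch of building such an archive in memory, under that assumption; `build_key_archive` and the file path are hypothetical, not minikube's implementation.)

```python
import io
import tarfile

def build_key_archive(pubkey: bytes) -> bytes:
    """Pack an authorized_keys-style file into an in-memory tar archive."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=".ssh/authorized_keys")  # hypothetical path
        info.size = len(pubkey)
        info.mode = 0o600
        tar.addfile(info, io.BytesIO(pubkey))
    return buf.getvalue()

archive = build_key_archive(b"ssh-rsa AAAA... docker@minikube\n")
with tarfile.open(fileobj=io.BytesIO(archive)) as tar:
    print(tar.getnames())  # ['.ssh/authorized_keys']
```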
	I0416 16:54:10.992883   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -SizeBytes 20000MB
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-022600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600 -DynamicMemoryEnabled $false
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:21.397696   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600 -Count 2
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:23.302296   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\boot2docker.iso'
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:25.541060   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd'
	I0416 16:54:27.919093   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: Starting VM...
	I0416 16:54:27.919462   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600
	I0416 16:54:30.480037   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:32.483346   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:34.785082   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:34.785271   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:35.799483   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:37.788898   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:40.058231   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:40.058742   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:41.064074   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:45.301253   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:45.301420   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:46.309647   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:50.614494   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:50.615195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:51.620909   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:53.639317   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:53.640351   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:53.640405   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:55.942630   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:54:55.943393   12816 main.go:141] libmachine: [stderr =====>] : 
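(Editor's note: the repeated `Get-VM ... .state` / `ipaddresses[0]` queries above are a poll-until-ready loop: the network adapter reports no address until the guest's stack comes up, here after roughly 25 seconds. The pattern, sketched generically; `wait_for_ip` and its defaults are illustrative, though the ~1 s gaps mirror the retry spacing in the log.)

```python
import time
from typing import Callable

def wait_for_ip(query: Callable[[], str], attempts: int = 60, delay: float = 1.0) -> str:
    """Call query() until it returns a non-empty IP or attempts run out."""
    for _ in range(attempts):
        ip = query().strip()
        if ip:
            return ip
        time.sleep(delay)
    raise TimeoutError("VM never reported an IP address")

# Simulated driver: empty output until the fifth poll, as in the log above.
responses = iter(["", "", "", "", "172.19.81.207"])
print(wait_for_ip(lambda: next(responses), delay=0))  # 172.19.81.207
```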
	I0416 16:54:55.943471   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:57.837395   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:54:57.837474   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:59.762683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:59.763360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:59.763440   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:02.010689   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:02.023158   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:02.023158   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:55:02.152140   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:55:02.152244   12816 buildroot.go:166] provisioning hostname "ha-022600"
	I0416 16:55:02.152322   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:03.957618   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:06.309822   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:06.310484   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:06.310484   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600 && echo "ha-022600" | sudo tee /etc/hostname
	I0416 16:55:06.479074   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600
	
	I0416 16:55:06.479182   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:08.433073   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:10.796713   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:10.797321   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:10.797321   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:55:10.944702   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
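(Editor's note: the shell fragment above keeps /etc/hosts idempotent: do nothing if the node name is already present, rewrite an existing `127.0.1.1` line if there is one, otherwise append a new entry. The same logic as a Python sketch, operating on the file's text instead of via grep/sed; `ensure_host_entry` is a hypothetical helper.)

```python
import re

def ensure_host_entry(hosts_text: str, name: str) -> str:
    """Mirror the grep/sed logic: no-op if name is present,
    replace a stale 127.0.1.1 line, else append one."""
    if re.search(rf"^.*\s{re.escape(name)}$", hosts_text, re.MULTILINE):
        return hosts_text  # already mapped
    if re.search(r"^127\.0\.1\.1\s.*$", hosts_text, re.MULTILINE):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}",
                      hosts_text, count=1, flags=re.MULTILINE)
    return hosts_text + f"127.0.1.1 {name}\n"

print(ensure_host_entry("127.0.0.1 localhost\n", "ha-022600"))
```

Running it twice returns the same text, which is exactly why the provisioner can re-run this step safely on every start.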
	I0416 16:55:10.944870   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:55:10.944983   12816 buildroot.go:174] setting up certificates
	I0416 16:55:10.944983   12816 provision.go:84] configureAuth start
	I0416 16:55:10.945092   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:12.933614   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:17.088334   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:19.325791   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:19.326294   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:19.326294   12816 provision.go:143] copyHostCerts
	I0416 16:55:19.326294   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:55:19.326294   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:55:19.326294   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:55:19.326900   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:55:19.328097   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:55:19.328097   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:55:19.329417   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:55:19.329417   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:55:19.329417   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:55:19.330063   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:55:19.330726   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600 san=[127.0.0.1 172.19.81.207 ha-022600 localhost minikube]
	I0416 16:55:19.539117   12816 provision.go:177] copyRemoteCerts
	I0416 16:55:19.547114   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:55:19.547114   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:23.727019   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:23.834423   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.287066s)
	I0416 16:55:23.834577   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:55:23.835008   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:55:23.874966   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:55:23.875470   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0416 16:55:23.923921   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:55:23.923921   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:55:23.965042   12816 provision.go:87] duration metric: took 13.0192422s to configureAuth
	I0416 16:55:23.965042   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:55:23.965741   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:55:23.965827   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:25.905339   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:25.905903   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:25.905986   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:28.170079   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:28.170419   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:28.173356   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:28.173937   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:28.173937   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:55:28.301727   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:55:28.301727   12816 buildroot.go:70] root file system type: tmpfs
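(Editor's note: `df --output=fstype / | tail -n 1` prints a "Type" header followed by the root filesystem type; `tmpfs` here tells the provisioner it is on a buildroot live image. Parsing that output can be sketched as; `root_fs_type` is an illustrative name.)

```python
def root_fs_type(df_output: str) -> str:
    """Extract the filesystem type from `df --output=fstype /` output:
    a "Type" header line followed by the value (tail -n 1 equivalent)."""
    lines = [l for l in df_output.strip().splitlines() if l.strip()]
    return lines[-1].strip()

print(root_fs_type("Type\ntmpfs\n"))  # tmpfs
```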
	I0416 16:55:28.302425   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:55:28.302506   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:30.181889   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:32.398667   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:32.399299   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:32.399475   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:55:32.556658   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:55:32.556887   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:34.446928   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:34.446969   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:34.447053   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:36.709442   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:36.710242   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:36.714111   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:36.714437   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:36.714437   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:55:38.655929   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 16:55:38.655929   12816 machine.go:97] duration metric: took 40.8162201s to provisionDockerMachine
	I0416 16:55:38.656036   12816 client.go:171] duration metric: took 1m43.6397622s to LocalClient.Create
	I0416 16:55:38.656036   12816 start.go:167] duration metric: took 1m43.6397622s to libmachine.API.Create "ha-022600"
	I0416 16:55:38.656036   12816 start.go:293] postStartSetup for "ha-022600" (driver="hyperv")
	I0416 16:55:38.656036   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:55:38.665072   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:55:38.665072   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:40.515910   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:42.764754   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:42.765404   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:42.765404   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:42.879399   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2140881s)
	I0416 16:55:42.892410   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:55:42.899117   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:55:42.899117   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:55:42.899734   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:55:42.901086   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:55:42.901154   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:55:42.911237   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:55:42.927664   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:55:42.975440   12816 start.go:296] duration metric: took 4.3191592s for postStartSetup
	I0416 16:55:42.977201   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:44.831562   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:47.134349   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:47.134788   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:47.135000   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:55:47.137270   12816 start.go:128] duration metric: took 1m52.1217609s to createHost
	I0416 16:55:47.137270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:49.024657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:51.238446   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:51.238526   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:51.242455   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:51.243052   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:51.243052   12816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:55:51.369469   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286551.540248133
	
	I0416 16:55:51.369575   12816 fix.go:216] guest clock: 1713286551.540248133
	I0416 16:55:51.369575   12816 fix.go:229] Guest: 2024-04-16 16:55:51.540248133 +0000 UTC Remote: 2024-04-16 16:55:47.1372703 +0000 UTC m=+117.146546101 (delta=4.402977833s)
	I0416 16:55:51.369790   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:53.407581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:55.667543   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:55.667688   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:55.667688   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286551
	I0416 16:55:55.810591   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:55:51 UTC 2024
	
	I0416 16:55:55.810700   12816 fix.go:236] clock set: Tue Apr 16 16:55:51 UTC 2024
	 (err=<nil>)
	I0416 16:55:55.810700   12816 start.go:83] releasing machines lock for "ha-022600", held for 2m0.7946995s
	I0416 16:55:55.810965   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:57.711672   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:59.985139   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:59.985210   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:59.988730   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:55:59.988803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:59.998550   12816 ssh_runner.go:195] Run: cat /version.json
	I0416 16:55:59.998550   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:01.995788   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.995959   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.996084   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:04.379274   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.379356   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.379701   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.391360   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.392161   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.392520   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.469159   12816 ssh_runner.go:235] Completed: cat /version.json: (4.4703555s)
	I0416 16:56:04.479363   12816 ssh_runner.go:195] Run: systemctl --version
	I0416 16:56:04.584079   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5950892s)
	I0416 16:56:04.593130   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:56:04.602217   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:56:04.610705   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:56:04.639084   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:56:04.639119   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:04.639119   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:04.684127   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:56:04.713899   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:56:04.734297   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:56:04.745020   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:56:04.776657   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.806087   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:56:04.854166   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.890388   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:56:04.918140   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:56:04.946595   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:56:04.975408   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:56:05.001633   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:56:05.028505   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:56:05.053299   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:05.230466   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
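The block above rewrites `/etc/containerd/config.toml` with a chain of `sed` expressions before restarting containerd. This sketch reproduces one of them in Go, forcing `SystemdCgroup = false` so containerd uses the "cgroupfs" driver, with the capture group preserving the original indentation exactly as the `sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'` expression does:

```go
package main

import (
	"fmt"
	"regexp"
)

// setCgroupfs rewrites any SystemdCgroup assignment to false, keeping
// the line's leading indentation via the capture group -- the Go
// equivalent of the sed expression run over config.toml in the log.
func setCgroupfs(config string) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
}

func main() {
	in := "  [plugins.cri.containerd.runtimes.runc.options]\n    SystemdCgroup = true"
	fmt.Println(setCgroupfs(in))
}
```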
	I0416 16:56:05.260161   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:05.269988   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:56:05.302694   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.335619   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:56:05.368663   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.402792   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.435612   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:56:05.483431   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.505797   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:05.548843   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:56:05.563980   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:56:05.582552   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:56:05.624048   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:56:05.804495   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:56:05.984936   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:56:05.985183   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:56:06.032244   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:06.217075   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:08.662995   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4457805s)
	I0416 16:56:08.670977   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 16:56:08.701542   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:08.730698   12816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 16:56:08.941813   12816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 16:56:09.145939   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.331138   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 16:56:09.370232   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:09.409657   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.615575   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 16:56:09.726879   12816 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 16:56:09.737760   12816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 16:56:09.746450   12816 start.go:562] Will wait 60s for crictl version
	I0416 16:56:09.755840   12816 ssh_runner.go:195] Run: which crictl
	I0416 16:56:09.771470   12816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:56:09.827603   12816 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 16:56:09.836477   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.874967   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.907967   12816 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 16:56:09.908249   12816 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: 172.19.80.1/20
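The `getIPForInterface` lines above scan the host's network adapters for the first one whose name starts with `"vEthernet (Default Switch)"`, logging each non-match along the way. The same prefix scan can be sketched over the static names from this log:

```go
package main

import (
	"fmt"
	"strings"
)

// findInterface returns the first name matching the given prefix,
// mirroring the scan in ip.go: "Ethernet 2" and the loopback adapter
// are rejected before the Hyper-V default switch matches.
func findInterface(names []string, prefix string) (string, bool) {
	for _, n := range names {
		if strings.HasPrefix(n, prefix) {
			return n, true
		}
	}
	return "", false
}

func main() {
	names := []string{"Ethernet 2", "Loopback Pseudo-Interface 1", "vEthernet (Default Switch)"}
	name, ok := findInterface(names, "vEthernet (Default Switch)")
	fmt.Println(name, ok)
}
```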
	I0416 16:56:09.924842   12816 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 16:56:09.931830   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:56:09.968931   12816 kubeadm.go:877] updating cluster {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:56:09.968931   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:56:09.975955   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:09.996899   12816 docker.go:685] Got preloaded images: 
	I0416 16:56:09.996899   12816 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0416 16:56:10.008276   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:10.035609   12816 ssh_runner.go:195] Run: which lz4
	I0416 16:56:10.042582   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0416 16:56:10.050849   12816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0416 16:56:10.058074   12816 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 16:56:10.058074   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0416 16:56:11.721910   12816 docker.go:649] duration metric: took 1.6789563s to copy over tarball
	I0416 16:56:11.731181   12816 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 16:56:20.333529   12816 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.60186s)
	I0416 16:56:20.333529   12816 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 16:56:20.400516   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:20.419486   12816 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0416 16:56:20.469018   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:20.655543   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:23.229259   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5734984s)
	I0416 16:56:23.240705   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:23.262332   12816 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0416 16:56:23.262383   12816 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:56:23.262383   12816 kubeadm.go:928] updating node { 172.19.81.207 8443 v1.29.3 docker true true} ...
	I0416 16:56:23.262383   12816 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-022600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.81.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:56:23.270008   12816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 16:56:23.307277   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:23.307277   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:23.307362   12816 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:56:23.307406   12816 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.81.207 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-022600 NodeName:ha-022600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.81.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.81.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:56:23.307691   12816 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.81.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-022600"
	  kubeletExtraArgs:
	    node-ip: 172.19.81.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.81.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
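The kubeadm config above hard-codes three address ranges: the pod subnet (10.244.0.0/16), the service subnet (10.96.0.0/12), and the node's advertise address (172.19.81.207). For a cluster to come up cleanly the two virtual ranges must be disjoint and the node IP must sit outside both; a quick sanity check with Python's stdlib `ipaddress` module (a hypothetical helper for illustration, not part of minikube):

```python
import ipaddress

# Values taken verbatim from the generated kubeadm config above.
pod_cidr = ipaddress.ip_network("10.244.0.0/16")
service_cidr = ipaddress.ip_network("10.96.0.0/12")
node_ip = ipaddress.ip_address("172.19.81.207")

# The two cluster-internal ranges must be disjoint.
assert not pod_cidr.overlaps(service_cidr)

# The node's real address must live outside both virtual ranges.
assert node_ip not in pod_cidr
assert node_ip not in service_cidr
```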
	I0416 16:56:23.307749   12816 kube-vip.go:111] generating kube-vip config ...
	I0416 16:56:23.318492   12816 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:56:23.343950   12816 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:56:23.344258   12816 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
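The kube-vip manifest above pins the HA virtual IP to 172.19.95.254 while the node itself is 172.19.81.207. minikube derives the VIP from the node's subnet; assuming this Hyper-V network is 172.19.80.0/20 (an assumption — the log shows only individual addresses, never a prefix length), the chosen VIP is the last usable host address of that network:

```python
import ipaddress

# Assumption: the Hyper-V switch network here is 172.19.80.0/20.
net = ipaddress.ip_network("172.19.80.0/20")
node_ip = ipaddress.ip_address("172.19.81.207")   # node address from the log
vip = ipaddress.ip_address("172.19.95.254")       # APIServerHAVIP from the log

# VIP and node share a broadcast domain, so ARP-based failover can work.
assert node_ip in net and vip in net

# Under this assumption the VIP is the last usable host address.
assert vip == list(net.hosts())[-1]
```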
	I0416 16:56:23.353585   12816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:56:23.370542   12816 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:56:23.379813   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0416 16:56:23.397865   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0416 16:56:23.432291   12816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:56:23.462868   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0416 16:56:23.492579   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0416 16:56:23.534977   12816 ssh_runner.go:195] Run: grep 172.19.95.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:56:23.542734   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
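The /etc/hosts update above uses a grep-and-rewrite idiom so repeated runs stay idempotent: strip any existing `control-plane.minikube.internal` line, then append the current mapping. The same logic as a small Python sketch (hypothetical helper operating on the file contents as a string, mirroring the bash one-liner in the log):

```python
def update_hosts(contents: str, ip: str, host: str) -> str:
    """Drop any existing line mapping `host`, then append `ip\\thost`."""
    kept = [line for line in contents.splitlines()
            if not line.endswith("\t" + host)]
    kept.append(f"{ip}\t{host}")
    return "\n".join(kept) + "\n"

before = ("127.0.0.1\tlocalhost\n"
          "172.19.0.9\tcontrol-plane.minikube.internal\n")
after = update_hosts(before, "172.19.95.254",
                     "control-plane.minikube.internal")

# The stale mapping is gone and exactly one current mapping remains.
assert "172.19.0.9" not in after
assert after.endswith("172.19.95.254\tcontrol-plane.minikube.internal\n")
```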
	I0416 16:56:23.575719   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:23.754395   12816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:56:23.781462   12816 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600 for IP: 172.19.81.207
	I0416 16:56:23.781462   12816 certs.go:194] generating shared ca certs ...
	I0416 16:56:23.781462   12816 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 16:56:23.783651   12816 certs.go:256] generating profile certs ...
	I0416 16:56:23.784402   12816 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key
	I0416 16:56:23.784569   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt with IP's: []
	I0416 16:56:23.984047   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt ...
	I0416 16:56:23.984047   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt: {Name:mk3ebdcb7f076a09a399313f7ed3edf14403a6a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.985977   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key ...
	I0416 16:56:23.985977   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key: {Name:mk94343a485b04f4b25a0ccd3245e197e7ecbec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.986215   12816 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648
	I0416 16:56:23.987265   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.81.207 172.19.95.254]
	I0416 16:56:24.317716   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 ...
	I0416 16:56:24.317716   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648: {Name:mk30f7000427979a1bcf8d6fc3995d1f7ccc170c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.319797   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 ...
	I0416 16:56:24.319797   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648: {Name:mk95e9e3e0f96031ef005f6c36470c216303a111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.320163   12816 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt
	I0416 16:56:24.331288   12816 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key
	I0416 16:56:24.332214   12816 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key
	I0416 16:56:24.332214   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt with IP's: []
	I0416 16:56:24.406574   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt ...
	I0416 16:56:24.406574   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt: {Name:mk73158a02cd8861e90a3b76d50704b360c358ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.407013   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key ...
	I0416 16:56:24.407013   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key: {Name:mk6842e2af8fadaf278ec7592edd5bd96f07c8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.408078   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:56:24.408945   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:56:24.409732   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:56:24.417870   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:56:24.418145   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 16:56:24.418533   12816 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 16:56:24.418533   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 16:56:24.418811   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 16:56:24.418990   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 16:56:24.419161   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 16:56:24.419368   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 16:56:24.419647   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 16:56:24.419767   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:24.419867   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 16:56:24.420003   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:56:24.466985   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:56:24.509816   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:56:24.554817   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:56:24.603006   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 16:56:24.646596   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 16:56:24.694120   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:56:24.741669   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 16:56:24.785888   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 16:56:24.829403   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:56:24.891821   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 16:56:24.933883   12816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 16:56:24.975091   12816 ssh_runner.go:195] Run: openssl version
	I0416 16:56:24.994129   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 16:56:25.021821   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.028512   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.037989   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.054924   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:56:25.080011   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:56:25.106815   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.113980   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.126339   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.144599   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:56:25.170309   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 16:56:25.199080   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.206080   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.214031   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.237026   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
	I0416 16:56:25.266837   12816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:56:25.273408   12816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:56:25.273858   12816 kubeadm.go:391] StartCluster: {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:56:25.281991   12816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 16:56:25.314891   12816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 16:56:25.342248   12816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 16:56:25.368032   12816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 16:56:25.385737   12816 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 16:56:25.385737   12816 kubeadm.go:156] found existing configuration files:
	
	I0416 16:56:25.393851   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 16:56:25.410393   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 16:56:25.421874   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 16:56:25.453762   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 16:56:25.468769   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 16:56:25.477353   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 16:56:25.501898   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.515888   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 16:56:25.524885   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.548518   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 16:56:25.563660   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 16:56:25.572269   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 16:56:25.587981   12816 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 16:56:25.791977   12816 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 16:56:25.791977   12816 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 16:56:25.958638   12816 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 16:56:25.959035   12816 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 16:56:25.959403   12816 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0416 16:56:26.228464   12816 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 16:56:26.229544   12816 out.go:204]   - Generating certificates and keys ...
	I0416 16:56:26.229862   12816 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 16:56:26.230882   12816 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 16:56:26.359024   12816 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 16:56:26.583044   12816 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 16:56:26.715543   12816 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 16:56:27.014892   12816 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 16:56:27.414264   12816 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 16:56:27.414467   12816 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.642396   12816 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 16:56:27.642770   12816 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.844566   12816 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 16:56:28.089475   12816 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 16:56:28.543900   12816 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 16:56:28.548586   12816 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 16:56:29.051829   12816 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 16:56:29.485679   12816 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 16:56:29.830737   12816 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 16:56:30.055972   12816 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 16:56:30.310446   12816 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 16:56:30.311113   12816 out.go:204]   - Booting up control plane ...
	I0416 16:56:30.311289   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 16:56:30.311970   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 16:56:30.317049   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 16:56:30.342443   12816 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 16:56:30.526725   12816 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 16:56:37.142045   12816 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.615653 seconds
	I0416 16:56:37.159025   12816 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 16:56:37.175108   12816 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 16:56:37.707867   12816 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 16:56:37.708715   12816 kubeadm.go:309] [mark-control-plane] Marking the node ha-022600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 16:56:38.222729   12816 kubeadm.go:309] [bootstrap-token] Using token: a3r5qn.ikva200bfcppykg5
	I0416 16:56:38.223819   12816 out.go:204]   - Configuring RBAC rules ...
	I0416 16:56:38.224231   12816 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 16:56:38.232416   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 16:56:38.244982   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 16:56:38.249926   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 16:56:38.257723   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 16:56:38.262029   12816 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 16:56:38.279883   12816 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 16:56:38.592701   12816 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 16:56:38.638273   12816 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 16:56:38.639572   12816 kubeadm.go:309] 
	I0416 16:56:38.640154   12816 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 16:56:38.640230   12816 kubeadm.go:309] 
	I0416 16:56:38.640982   12816 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 16:56:38.641038   12816 kubeadm.go:309] 
	I0416 16:56:38.641299   12816 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 16:56:38.641581   12816 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309] 
	I0416 16:56:38.641989   12816 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 16:56:38.642031   12816 kubeadm.go:309] 
	I0416 16:56:38.642184   12816 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 16:56:38.642228   12816 kubeadm.go:309] 
	I0416 16:56:38.642350   12816 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 16:56:38.642660   12816 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 16:56:38.642862   12816 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 16:56:38.642900   12816 kubeadm.go:309] 
	I0416 16:56:38.643166   12816 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 16:56:38.643426   12816 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 16:56:38.643426   12816 kubeadm.go:309] 
	I0416 16:56:38.643613   12816 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.643867   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 16:56:38.643909   12816 kubeadm.go:309] 	--control-plane 
	I0416 16:56:38.643961   12816 kubeadm.go:309] 
	I0416 16:56:38.644233   12816 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 16:56:38.644272   12816 kubeadm.go:309] 
	I0416 16:56:38.644444   12816 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.644734   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 16:56:38.647455   12816 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 16:56:38.647488   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:38.647539   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:38.648246   12816 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 16:56:38.657141   12816 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 16:56:38.671263   12816 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 16:56:38.671263   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 16:56:38.722410   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 16:56:39.265655   12816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-022600 minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=ha-022600 minikube.k8s.io/primary=true
	I0416 16:56:39.290244   12816 ops.go:34] apiserver oom_adj: -16
	I0416 16:56:39.441163   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.950155   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.453751   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.955147   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.455931   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.953044   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.454696   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.949299   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.454962   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.953447   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.456402   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.956686   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.449476   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.951602   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.451988   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.949212   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.449356   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.950703   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.458777   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.956811   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.456669   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.943595   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.443906   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.950503   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.454863   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.944285   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:52.083562   12816 kubeadm.go:1107] duration metric: took 12.8170858s to wait for elevateKubeSystemPrivileges
	W0416 16:56:52.083816   12816 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 16:56:52.083816   12816 kubeadm.go:393] duration metric: took 26.808438s to StartCluster
	I0416 16:56:52.083816   12816 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.084214   12816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:52.086643   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.088384   12816 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:56:52.088384   12816 start.go:240] waiting for startup goroutines ...
	I0416 16:56:52.088384   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 16:56:52.088384   12816 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 16:56:52.088630   12816 addons.go:69] Setting storage-provisioner=true in profile "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:234] Setting addon storage-provisioner=true in "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:69] Setting default-storageclass=true in profile "ha-022600"
	I0416 16:56:52.088850   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:52.088964   12816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-022600"
	I0416 16:56:52.088964   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:56:52.090289   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.090671   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.207597   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 16:56:52.469504   12816 start.go:946] {"host.minikube.internal": 172.19.80.1} host record injected into CoreDNS's ConfigMap
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.165583   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.165635   12816 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:54.165635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.166734   12816 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:56:54.166340   12816 kapi.go:59] client config for ha-022600: &rest.Config{Host:"https://172.19.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 16:56:54.167133   12816 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:56:54.167133   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 16:56:54.167133   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:54.167791   12816 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 16:56:54.168180   12816 addons.go:234] Setting addon default-storageclass=true in "ha-022600"
	I0416 16:56:54.168347   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:54.169251   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:56.312581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.312988   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313046   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313270   12816 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 16:56:56.313270   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.330966   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:58.735727   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:58.735876   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.736103   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:58.898469   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:00.676245   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:57:00.828151   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 16:57:01.248041   12816 round_trippers.go:463] GET https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0416 16:57:01.248041   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.248041   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.248041   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.261890   12816 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0416 16:57:01.262478   12816 round_trippers.go:463] PUT https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0416 16:57:01.262478   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Content-Type: application/json
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.262478   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.268964   12816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:57:01.269995   12816 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 16:57:01.270495   12816 addons.go:505] duration metric: took 9.181591s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 16:57:01.270576   12816 start.go:245] waiting for cluster config update ...
	I0416 16:57:01.270618   12816 start.go:254] writing updated cluster config ...
	I0416 16:57:01.271859   12816 out.go:177] 
	I0416 16:57:01.284169   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:57:01.284169   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.285951   12816 out.go:177] * Starting "ha-022600-m02" control-plane node in "ha-022600" cluster
	I0416 16:57:01.286952   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:57:01.286952   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:57:01.286952   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:57:01.286952   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:57:01.286952   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.296247   12816 start.go:360] acquireMachinesLock for ha-022600-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:57:01.297324   12816 start.go:364] duration metric: took 1.0773ms to acquireMachinesLock for "ha-022600-m02"
	I0416 16:57:01.297559   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:57:01.297559   12816 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0416 16:57:01.297559   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:57:01.297559   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:57:01.297559   12816 client.go:168] LocalClient.Create starting
	I0416 16:57:01.298838   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299293   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:57:03.017072   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:57:03.017279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:03.017366   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:09.316740   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:57:09.669552   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: Creating VM...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:12.690107   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:57:12.690185   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:14.267157   12816 main.go:141] libmachine: Creating VHD
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : FE960248-03C1-43D6-B7AE-A60D4C86308B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:57:17.758158   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:57:20.709379   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:20.709950   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:20.710019   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -SizeBytes 20000MB
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-022600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600-m02 -DynamicMemoryEnabled $false
	I0416 16:57:28.159153   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:28.159229   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:28.159409   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600-m02 -Count 2
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\boot2docker.iso'
	I0416 16:57:32.420739   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:32.421735   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:32.421878   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd'
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: Starting VM...
	I0416 16:57:34.780971   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600-m02
	I0416 16:57:37.369505   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:57:37.369767   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:39.415286   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:42.700464   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:48.000886   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:49.992438   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:49.992894   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:49.992930   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:53.290891   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:55.287716   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:55.287962   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:55.288037   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:58.572803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:02.905391   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stderr =====>] : 
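The repeated `( Hyper-V\Get-VM ... ).state` / `ipaddresses[0]` pairs above are a poll-until-ready loop: libmachine keeps querying until the adapter reports an address, sleeping about a second between attempts. A minimal shell sketch of that pattern (the `get_ip` stub and `count_file` are hypothetical stand-ins for the PowerShell query, not minikube's actual code):

```shell
set -eu
count_file=$(mktemp)          # tracks poll attempts across subshells
echo 0 > "$count_file"

# Hypothetical stub: returns nothing for the first two polls, then an address,
# mimicking the empty [stdout] lines followed by 172.19.80.125 in the log.
get_ip() {
  n=$(($(cat "$count_file") + 1))
  echo "$n" > "$count_file"
  if [ "$n" -ge 3 ]; then echo "172.19.80.125"; fi
}

ip=""
while [ -z "$ip" ]; do
  ip=$(get_ip)
  [ -n "$ip" ] || sleep 0.1   # the log shows ~1 s between polls; shortened here
done
echo "got IP: $ip"
```

The real loop additionally re-checks the VM state before each IP query, since a VM that has left the `Running` state will never report an address.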
	I0416 16:58:04.899479   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:58:04.899479   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:06.914869   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:09.273511   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:09.273546   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:09.277783   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:09.278406   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:09.278406   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:58:09.413281   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:58:09.413281   12816 buildroot.go:166] provisioning hostname "ha-022600-m02"
	I0416 16:58:09.413281   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:11.439079   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:13.805295   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:13.805684   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:13.805684   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600-m02 && echo "ha-022600-m02" | sudo tee /etc/hostname
	I0416 16:58:13.957933   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600-m02
	
	I0416 16:58:13.958021   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:18.176996   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:18.178002   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:18.182057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:18.182681   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:18.182681   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:58:18.315751   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:58:18.315853   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:58:18.315853   12816 buildroot.go:174] setting up certificates
	I0416 16:58:18.315853   12816 provision.go:84] configureAuth start
	I0416 16:58:18.315853   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:20.243862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:22.525833   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:22.525945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:22.526057   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:24.418894   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:26.735560   12816 provision.go:143] copyHostCerts
	I0416 16:58:26.736546   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:58:26.736627   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:58:26.737290   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:58:26.737900   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:58:26.737900   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:58:26.738191   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:58:26.738908   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:58:26.738977   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:58:26.739652   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600-m02 san=[127.0.0.1 172.19.80.125 ha-022600-m02 localhost minikube]
	I0416 16:58:26.917277   12816 provision.go:177] copyRemoteCerts
	I0416 16:58:26.926308   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:58:26.926600   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:28.830343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:31.113681   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:31.229222   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3026703s)
	I0416 16:58:31.229222   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:58:31.229700   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:58:31.279666   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:58:31.280307   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:58:31.328101   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:58:31.328245   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:58:31.382563   12816 provision.go:87] duration metric: took 13.065969s to configureAuth
	I0416 16:58:31.382563   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:58:31.383343   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:58:31.383343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:33.331275   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:35.653673   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:35.653721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:35.656855   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:35.657430   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:35.657430   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:58:35.781565   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:58:35.781565   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:58:35.781565   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:58:35.782090   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:37.696344   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:39.961057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:39.961515   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:39.961564   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.81.207"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:58:40.123664   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.81.207
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:58:40.123818   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:42.064878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:42.064974   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:42.065152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:44.330103   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:44.330731   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:44.330731   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:58:46.283136   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 16:58:46.283253   12816 machine.go:97] duration metric: took 41.3814279s to provisionDockerMachine
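The `sudo diff -u old new || { sudo mv new old; ... restart; }` command above is an install-only-if-changed idiom: when the unit file is missing or differs, `diff` exits non-zero and the replacement-plus-restart branch runs (which is why the first run prints the "can't stat" error and then installs the unit). A self-contained sketch under a temp directory (paths are illustrative, not minikube's):

```shell
set -eu
tmp=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd\n' > "$tmp/docker.service.new"

# First run: docker.service does not exist yet, so diff fails (stderr
# suppressed) and the new file is moved into place, as in the log above.
diff -u "$tmp/docker.service" "$tmp/docker.service.new" 2>/dev/null || {
  mv "$tmp/docker.service.new" "$tmp/docker.service"
  echo "installed new unit"
}
```

On a second run with identical content, `diff` would exit zero and the replace/restart branch would be skipped entirely, avoiding a needless daemon restart.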
	I0416 16:58:46.283253   12816 client.go:171] duration metric: took 1m44.9797412s to LocalClient.Create
	I0416 16:58:46.283253   12816 start.go:167] duration metric: took 1m44.9797412s to libmachine.API.Create "ha-022600"
	I0416 16:58:46.283253   12816 start.go:293] postStartSetup for "ha-022600-m02" (driver="hyperv")
	I0416 16:58:46.283345   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:58:46.292724   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:58:46.292724   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:50.480821   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:50.575284   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2823171s)
	I0416 16:58:50.584260   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:58:50.591292   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:58:50.591900   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:58:50.591900   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:58:50.601073   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:58:50.618807   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:58:50.671301   12816 start.go:296] duration metric: took 4.3877068s for postStartSetup
	I0416 16:58:50.673161   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:52.621684   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:54.923763   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:58:54.926483   12816 start.go:128] duration metric: took 1m53.622481s to createHost
	I0416 16:58:54.926657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:56.793184   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:59.024255   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:59.025184   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:59.029108   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:59.029633   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:59.029730   12816 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 16:58:59.149333   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286739.315259098
	
	I0416 16:58:59.149333   12816 fix.go:216] guest clock: 1713286739.315259098
	I0416 16:58:59.149333   12816 fix.go:229] Guest: 2024-04-16 16:58:59.315259098 +0000 UTC Remote: 2024-04-16 16:58:54.9265716 +0000 UTC m=+304.925199701 (delta=4.388687498s)
	I0416 16:58:59.149333   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:01.054656   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:03.307071   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:59:03.307459   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:59:03.307531   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286739
	I0416 16:59:03.449024   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:58:59 UTC 2024
	
	I0416 16:59:03.449024   12816 fix.go:236] clock set: Tue Apr 16 16:58:59 UTC 2024
	 (err=<nil>)
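The clock-fix step above reads the guest's epoch time (`date +%s.%N`), compares it to the host-side timestamp, and resets the guest clock with `sudo date -s @<epoch>` when they drift. A sketch of just the delta computation, with the two timestamps hardcoded from the log rather than measured:

```shell
# Values taken from the log lines above, not freshly sampled.
guest=1713286739.315259098   # guest reading from `date +%s.%N`
remote=1713286734.926571600  # host-side timestamp for the same instant
delta=$(awk -v g="$guest" -v r="$remote" 'BEGIN { printf "%.3f", g - r }')
echo "delta=${delta}s"       # a large delta is what triggers `sudo date -s @<epoch>`
```

awk is used because POSIX shell arithmetic is integer-only; the log's reported delta of 4.388687498s matches this subtraction.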
	I0416 16:59:03.449024   12816 start.go:83] releasing machines lock for "ha-022600-m02", held for 2m2.1447745s
	I0416 16:59:03.450039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:07.739042   12816 out.go:177] * Found network options:
	I0416 16:59:07.739784   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.740404   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.741027   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.741505   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 16:59:07.742708   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.744988   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:59:07.745153   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:07.752817   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 16:59:07.752817   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:09.758953   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:12.157582   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.158536   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.159044   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.185179   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.257436   12816 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5043642s)
	W0416 16:59:12.257436   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:59:12.266545   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:59:12.367206   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:59:12.367296   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:12.367330   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6219642s)
	I0416 16:59:12.367330   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:12.423201   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:59:12.453988   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:59:12.472992   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:59:12.482991   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:59:12.510864   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.538866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:59:12.565866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.597751   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:59:12.622761   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:59:12.648905   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:59:12.674904   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:59:12.713452   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:59:12.741495   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:59:12.768497   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:12.975524   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
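The sequence above rewrites `/etc/containerd/config.toml` in place with `sed`: it forces `SystemdCgroup = false` (cgroupfs driver) and swaps the legacy `io.containerd.runtime.v1.linux` shim for `io.containerd.runc.v2`, then restarts containerd. A sketch of the same two edits against a throwaway copy of the file (the TOML fragment is illustrative, not the VM's full config):

```shell
# Sketch: minikube's cgroupfs/runc-v2 sed edits, run on a temp copy
# instead of /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runtime.v1.linux"
  SystemdCgroup = true
EOF
# Flip the cgroup driver to cgroupfs (preserving indentation via \1):
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
# Pin the modern runc v2 shim:
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$cfg"
grep -E 'SystemdCgroup|runtime_type' "$cfg"
```

On the real node this is followed by `systemctl daemon-reload` and `systemctl restart containerd`, as the log shows.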
	I0416 16:59:13.011635   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:13.023647   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:59:13.058146   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.091991   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:59:13.139058   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.173081   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.208242   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:59:13.259511   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.282094   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:13.329081   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:59:13.344832   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:59:13.362131   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:59:13.403377   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:59:13.597444   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:59:13.768147   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:59:13.768278   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:59:13.808294   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:13.987216   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:00:15.104612   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1138396s)
	I0416 17:00:15.115049   12816 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 17:00:15.145752   12816 out.go:177] 
	W0416 17:00:15.146473   12816 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 16:58:45 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.076842920Z" level=info msg="Starting up"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.077687177Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.078706068Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.109665355Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138411128Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138448735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138508447Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138523049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138600164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138632670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138848110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138955930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139030244Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139045347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139142365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139433520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142495192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142588309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142778845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142795748Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143044695Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143174419Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143191422Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.152862930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153144583Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153313214Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153337519Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153354522Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153467543Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153957434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154159572Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154195179Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154212082Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154230586Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154258491Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154272393Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154287696Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154303599Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154317302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154330504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154344107Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154373612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154392516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154406618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154421121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154434024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154447526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154460128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154474031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154498536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154514539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154525841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154555046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154568249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154583952Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154604755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154629960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154642062Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154700973Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154916114Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155014532Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155030135Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155203567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155302486Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155325090Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155706861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155796078Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155907599Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155947306Z" level=info msg="containerd successfully booted in 0.047582s"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.119001526Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.129323458Z" level=info msg="Loading containers: start."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.358382320Z" level=info msg="Loading containers: done."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377033580Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377149301Z" level=info msg="Daemon has completed initialization"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.447556885Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 16:58:46 ha-022600-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.449134569Z" level=info msg="API listen on [::]:2376"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.178053148Z" level=info msg="Processing signal 'terminated'"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.179830517Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 16:59:14 ha-022600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.180814055Z" level=info msg="Daemon shutdown complete"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181020363Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181054564Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:59:15 ha-022600-m02 dockerd[1019]: time="2024-04-16T16:59:15.248212596Z" level=info msg="Starting up"
	Apr 16 17:00:15 ha-022600-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0416 17:00:15.146611   12816 out.go:239] * 
	W0416 17:00:15.147806   12816 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 17:00:15.148383   12816 out.go:177] 
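The root cause in the journalctl output above is `failed to dial "/run/containerd/containerd.sock": context deadline exceeded`: on the second start, dockerd's managed containerd never came up, so `systemctl restart docker` failed after the 60s dial deadline. A hedged diagnostic sketch for this situation on a live node (the socket path is the one from the log; the commented systemctl/journalctl steps require the failing VM):

```shell
# Sketch: check for the containerd socket dockerd failed to dial before
# retrying the docker restart. Pure diagnosis; nothing is restarted here.
check_containerd_sock() {
  # Prints a diagnosis for the socket path given as $1.
  if [ -S "$1" ]; then
    echo "containerd socket present"
  else
    echo "containerd socket missing: restart containerd before docker"
  fi
}
check_containerd_sock /run/containerd/containerd.sock
# Follow-ups on the failing VM (not runnable outside it):
#   sudo systemctl status containerd
#   sudo journalctl -u containerd --no-pager | tail -n 50
```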
	
	
	==> Docker <==
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.394895828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.395264947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.439364904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.439538413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.439609817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.439883431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.468531198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.469028223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.469149629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.469403342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:57:05 ha-022600 cri-dockerd[1232]: time="2024-04-16T16:57:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bf991c3e34e2d715e0a0f401242ddb3db2484931261f08db3ddc84b4060deee2/resolv.conf as [nameserver 172.19.80.1]"
	Apr 16 16:57:05 ha-022600 cri-dockerd[1232]: time="2024-04-16T16:57:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ad38b0d59335f4feeddcbaba7498e6f55dffc099f85df14dd46f0ef8c9d9f44/resolv.conf as [nameserver 172.19.80.1]"
	Apr 16 16:57:05 ha-022600 cri-dockerd[1232]: time="2024-04-16T16:57:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/093278b3840efaa9102292efc824f4da47128fe771974b308b446bddc92c5910/resolv.conf as [nameserver 172.19.80.1]"
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.819802544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.819862547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.819875848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.819967753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.920688039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.920896649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.920921751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.921347872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:57:06 ha-022600 dockerd[1331]: time="2024-04-16T16:57:06.015075707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 16:57:06 ha-022600 dockerd[1331]: time="2024-04-16T16:57:06.015236816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 16:57:06 ha-022600 dockerd[1331]: time="2024-04-16T16:57:06.015340122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:57:06 ha-022600 dockerd[1331]: time="2024-04-16T16:57:06.015511532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3fe545bfad4e6       cbb01a7bd410d                                                                                       3 minutes ago       Running             coredns                   0                   093278b3840ef       coredns-76f75df574-qm89x
	979dee88be2b4       cbb01a7bd410d                                                                                       3 minutes ago       Running             coredns                   0                   4ad38b0d59335       coredns-76f75df574-ww2r6
	257879ecf06b2       6e38f40d628db                                                                                       3 minutes ago       Running             storage-provisioner       0                   bf991c3e34e2d       storage-provisioner
	be245de9ef545       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988            3 minutes ago       Running             kindnet-cni               0                   92c35b3fd0967       kindnet-mwqvl
	05db92f49e0df       a1d263b5dc5b0                                                                                       3 minutes ago       Running             kube-proxy                0                   12380f49c1509       kube-proxy-2vddt
	d1ba82cd26254       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016   3 minutes ago       Running             kube-vip                  0                   fa2c75c4c8d59       kube-vip-ha-022600
	a7fb69539df62       6052a25da3f97                                                                                       4 minutes ago       Running             kube-controller-manager   0                   b536621e20d4b       kube-controller-manager-ha-022600
	4fd5df8c9fd37       39f995c9f1996                                                                                       4 minutes ago       Running             kube-apiserver            0                   5a7a1e80caeb4       kube-apiserver-ha-022600
	e042d71e8b0e8       8c390d98f50c0                                                                                       4 minutes ago       Running             kube-scheduler            0                   5a2551c91a1b6       kube-scheduler-ha-022600
	c29b0762ff0bf       3861cfcd7c04c                                                                                       4 minutes ago       Running             etcd                      0                   c8a9aa3126cf5       etcd-ha-022600
	
	
	==> coredns [3fe545bfad4e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55981 - 4765 "HINFO IN 3735046377920793891.8143170502200932773. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058936595s
	
	
	==> coredns [979dee88be2b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50127 - 24072 "HINFO IN 7665836187497317301.2285362183679153792. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027543487s
	
	
	==> describe nodes <==
	Name:               ha-022600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-022600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-022600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:56:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-022600
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:00:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 16:57:09 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 16:57:09 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 16:57:09 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 16:57:09 +0000   Tue, 16 Apr 2024 16:57:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.81.207
	  Hostname:    ha-022600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4674338fa494bbcb2e21e2b4385c5e1
	  System UUID:                201025fc-0c03-cc49-a194-29d6500971a2
	  Boot ID:                    6ae5bedd-6e8e-4f58-b08c-8e9912fd04de
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-qm89x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m41s
	  kube-system                 coredns-76f75df574-ww2r6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m41s
	  kube-system                 etcd-ha-022600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m54s
	  kube-system                 kindnet-mwqvl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m42s
	  kube-system                 kube-apiserver-ha-022600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-controller-manager-ha-022600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-proxy-2vddt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-scheduler-ha-022600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-vip-ha-022600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m38s  kube-proxy       
	  Normal  Starting                 3m55s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m54s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m54s  kubelet          Node ha-022600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s  kubelet          Node ha-022600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s  kubelet          Node ha-022600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m43s  node-controller  Node ha-022600 event: Registered Node ha-022600 in Controller
	  Normal  NodeReady                3m29s  kubelet          Node ha-022600 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.308229] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.279563] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.656516] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr16 16:55] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.165290] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[Apr16 16:56] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.091843] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.493988] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.172637] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.230010] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +2.695048] systemd-fstab-generator[1185]: Ignoring "noauto" option for root device
	[  +0.219400] systemd-fstab-generator[1197]: Ignoring "noauto" option for root device
	[  +0.196554] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.267217] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[ +11.053282] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +0.095458] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.012264] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +6.758798] systemd-fstab-generator[1720]: Ignoring "noauto" option for root device
	[  +0.093227] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.850543] systemd-fstab-generator[2213]: Ignoring "noauto" option for root device
	[  +0.130310] kauditd_printk_skb: 72 callbacks suppressed
	[ +15.381320] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.386371] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [c29b0762ff0b] <==
	{"level":"info","ts":"2024-04-16T16:56:32.683077Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"15aeb555476ef740","local-member-id":"6fac5e7781389861","added-peer-id":"6fac5e7781389861","added-peer-peer-urls":["https://172.19.81.207:2380"]}
	{"level":"info","ts":"2024-04-16T16:56:32.723516Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T16:56:32.723756Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6fac5e7781389861","initial-advertise-peer-urls":["https://172.19.81.207:2380"],"listen-peer-urls":["https://172.19.81.207:2380"],"advertise-client-urls":["https://172.19.81.207:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.81.207:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T16:56:32.723814Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T16:56:32.723905Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.81.207:2380"}
	{"level":"info","ts":"2024-04-16T16:56:32.723914Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.81.207:2380"}
	{"level":"info","ts":"2024-04-16T16:56:33.030826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-16T16:56:33.031001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-16T16:56:33.031092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 received MsgPreVoteResp from 6fac5e7781389861 at term 1"}
	{"level":"info","ts":"2024-04-16T16:56:33.031194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 became candidate at term 2"}
	{"level":"info","ts":"2024-04-16T16:56:33.031296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 received MsgVoteResp from 6fac5e7781389861 at term 2"}
	{"level":"info","ts":"2024-04-16T16:56:33.031401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 became leader at term 2"}
	{"level":"info","ts":"2024-04-16T16:56:33.031482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6fac5e7781389861 elected leader 6fac5e7781389861 at term 2"}
	{"level":"info","ts":"2024-04-16T16:56:33.035895Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:56:33.039506Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"15aeb555476ef740","local-member-id":"6fac5e7781389861","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:56:33.039923Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:56:33.041955Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:56:33.042053Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6fac5e7781389861","local-member-attributes":"{Name:ha-022600 ClientURLs:[https://172.19.81.207:2379]}","request-path":"/0/members/6fac5e7781389861/attributes","cluster-id":"15aeb555476ef740","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T16:56:33.042276Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T16:56:33.044336Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.81.207:2379"}
	{"level":"info","ts":"2024-04-16T16:56:33.052851Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T16:56:33.052893Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T16:56:33.055802Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T16:56:33.063928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T16:57:01.41567Z","caller":"traceutil/trace.go:171","msg":"trace[1184878888] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"279.327005ms","start":"2024-04-16T16:57:01.136324Z","end":"2024-04-16T16:57:01.415651Z","steps":["trace[1184878888] 'process raft request'  (duration: 279.236301ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:00:33 up 5 min,  0 users,  load average: 0.36, 0.34, 0.16
	Linux ha-022600 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [be245de9ef54] <==
	I0416 16:58:30.401879       1 main.go:227] handling current node
	I0416 16:58:40.416917       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 16:58:40.417133       1 main.go:227] handling current node
	I0416 16:58:50.422317       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 16:58:50.422353       1 main.go:227] handling current node
	I0416 16:59:00.427252       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 16:59:00.427350       1 main.go:227] handling current node
	I0416 16:59:10.441636       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 16:59:10.441748       1 main.go:227] handling current node
	I0416 16:59:20.455284       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 16:59:20.455314       1 main.go:227] handling current node
	I0416 16:59:30.467853       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 16:59:30.468294       1 main.go:227] handling current node
	I0416 16:59:40.481202       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 16:59:40.481428       1 main.go:227] handling current node
	I0416 16:59:50.486418       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 16:59:50.486526       1 main.go:227] handling current node
	I0416 17:00:00.496966       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:00:00.497058       1 main.go:227] handling current node
	I0416 17:00:10.510108       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:00:10.510222       1 main.go:227] handling current node
	I0416 17:00:20.514735       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:00:20.515239       1 main.go:227] handling current node
	I0416 17:00:30.528207       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:00:30.528332       1 main.go:227] handling current node
	
	
	==> kube-apiserver [4fd5df8c9fd3] <==
	I0416 16:56:35.504221       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 16:56:35.510308       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 16:56:35.512679       1 aggregator.go:165] initial CRD sync complete...
	I0416 16:56:35.516211       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 16:56:35.516249       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 16:56:35.516256       1 cache.go:39] Caches are synced for autoregister controller
	I0416 16:56:35.517473       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 16:56:35.522352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 16:56:35.529558       1 controller.go:624] quota admission added evaluator for: namespaces
	I0416 16:56:35.542494       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 16:56:36.411016       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0416 16:56:36.418409       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0416 16:56:36.419376       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 16:56:37.172553       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 16:56:37.235069       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 16:56:37.370838       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0416 16:56:37.381797       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.81.207]
	I0416 16:56:37.383264       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 16:56:37.388718       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 16:56:37.435733       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 16:56:38.737496       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 16:56:38.764389       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0416 16:56:38.781093       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 16:56:51.466047       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0416 16:56:51.868826       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a7fb69539df6] <==
	I0416 16:56:51.070432       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 16:56:51.073735       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 16:56:51.115480       1 shared_informer.go:318] Caches are synced for cronjob
	I0416 16:56:51.473571       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 2"
	I0416 16:56:51.518819       1 shared_informer.go:318] Caches are synced for garbage collector
	I0416 16:56:51.560011       1 shared_informer.go:318] Caches are synced for garbage collector
	I0416 16:56:51.560222       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0416 16:56:51.889138       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mwqvl"
	I0416 16:56:51.892462       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2vddt"
	I0416 16:56:52.075620       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-ww2r6"
	I0416 16:56:52.089701       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-qm89x"
	I0416 16:56:52.106572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="633.961814ms"
	I0416 16:56:52.122316       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="15.686904ms"
	I0416 16:56:52.190000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="67.640369ms"
	I0416 16:56:52.190122       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="63.603µs"
	I0416 16:57:04.964104       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="174.809µs"
	I0416 16:57:04.979092       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="587.33µs"
	I0416 16:57:04.995404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="188.309µs"
	I0416 16:57:05.057328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="96.005µs"
	I0416 16:57:05.964586       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0416 16:57:07.181900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="167.009µs"
	I0416 16:57:07.224163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="15.307781ms"
	I0416 16:57:07.224903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="88.905µs"
	I0416 16:57:07.277301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="19.898845ms"
	I0416 16:57:07.277810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="53.303µs"
	
	
	==> kube-proxy [05db92f49e0d] <==
	I0416 16:56:54.468581       1 server_others.go:72] "Using iptables proxy"
	I0416 16:56:54.505964       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.81.207"]
	I0416 16:56:54.583838       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:56:54.584172       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:56:54.584273       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:56:54.590060       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:56:54.590806       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:56:54.591014       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:56:54.592331       1 config.go:188] "Starting service config controller"
	I0416 16:56:54.592517       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:56:54.592625       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:56:54.592689       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:56:54.594058       1 config.go:315] "Starting node config controller"
	I0416 16:56:54.594215       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 16:56:54.693900       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 16:56:54.693964       1 shared_informer.go:318] Caches are synced for service config
	I0416 16:56:54.694328       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e042d71e8b0e] <==
	W0416 16:56:36.501819       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 16:56:36.501922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 16:56:36.507709       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 16:56:36.507948       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 16:56:36.573671       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 16:56:36.573877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 16:56:36.602162       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 16:56:36.602205       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 16:56:36.621966       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.622272       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.648392       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 16:56:36.648623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 16:56:36.694872       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 16:56:36.694970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 16:56:36.804118       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.805424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.821863       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.822231       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.866017       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 16:56:36.866298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 16:56:36.904820       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 16:56:36.905097       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 16:56:36.917996       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.918036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0416 16:56:39.298679       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 16:57:05 ha-022600 kubelet[2220]: I0416 16:57:05.028253    2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/26174668-156f-4c4e-af02-95850d7b8e5e-tmp\") pod \"storage-provisioner\" (UID: \"26174668-156f-4c4e-af02-95850d7b8e5e\") " pod="kube-system/storage-provisioner"
	Apr 16 16:57:05 ha-022600 kubelet[2220]: I0416 16:57:05.028299    2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrxqz\" (UniqueName: \"kubernetes.io/projected/26174668-156f-4c4e-af02-95850d7b8e5e-kube-api-access-jrxqz\") pod \"storage-provisioner\" (UID: \"26174668-156f-4c4e-af02-95850d7b8e5e\") " pod="kube-system/storage-provisioner"
	Apr 16 16:57:05 ha-022600 kubelet[2220]: I0416 16:57:05.028338    2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bd76d3c-e995-4eee-984d-dda4f6cceb45-config-volume\") pod \"coredns-76f75df574-qm89x\" (UID: \"3bd76d3c-e995-4eee-984d-dda4f6cceb45\") " pod="kube-system/coredns-76f75df574-qm89x"
	Apr 16 16:57:05 ha-022600 kubelet[2220]: I0416 16:57:05.028369    2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwts7\" (UniqueName: \"kubernetes.io/projected/737c5852-9ad2-4c33-a032-de88deddadbc-kube-api-access-nwts7\") pod \"coredns-76f75df574-ww2r6\" (UID: \"737c5852-9ad2-4c33-a032-de88deddadbc\") " pod="kube-system/coredns-76f75df574-ww2r6"
	Apr 16 16:57:05 ha-022600 kubelet[2220]: I0416 16:57:05.028400    2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrtvl\" (UniqueName: \"kubernetes.io/projected/3bd76d3c-e995-4eee-984d-dda4f6cceb45-kube-api-access-jrtvl\") pod \"coredns-76f75df574-qm89x\" (UID: \"3bd76d3c-e995-4eee-984d-dda4f6cceb45\") " pod="kube-system/coredns-76f75df574-qm89x"
	Apr 16 16:57:05 ha-022600 kubelet[2220]: I0416 16:57:05.715973    2220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ad38b0d59335f4feeddcbaba7498e6f55dffc099f85df14dd46f0ef8c9d9f44"
	Apr 16 16:57:05 ha-022600 kubelet[2220]: I0416 16:57:05.934867    2220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf991c3e34e2d715e0a0f401242ddb3db2484931261f08db3ddc84b4060deee2"
	Apr 16 16:57:06 ha-022600 kubelet[2220]: I0416 16:57:06.136649    2220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="093278b3840efaa9102292efc824f4da47128fe771974b308b446bddc92c5910"
	Apr 16 16:57:07 ha-022600 kubelet[2220]: I0416 16:57:07.208446    2220 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ww2r6" podStartSLOduration=15.208403535 podStartE2EDuration="15.208403535s" podCreationTimestamp="2024-04-16 16:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-16 16:57:07.17841611 +0000 UTC m=+28.478158164" watchObservedRunningTime="2024-04-16 16:57:07.208403535 +0000 UTC m=+28.508145489"
	Apr 16 16:57:07 ha-022600 kubelet[2220]: I0416 16:57:07.247828    2220 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=8.247747099 podStartE2EDuration="8.247747099s" podCreationTimestamp="2024-04-16 16:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-16 16:57:07.232332812 +0000 UTC m=+28.532074866" watchObservedRunningTime="2024-04-16 16:57:07.247747099 +0000 UTC m=+28.547489153"
	Apr 16 16:57:38 ha-022600 kubelet[2220]: E0416 16:57:38.995263    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:57:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:57:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:57:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:57:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 16:58:38 ha-022600 kubelet[2220]: E0416 16:58:38.995568    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:58:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:58:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:58:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:58:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 16:59:38 ha-022600 kubelet[2220]: E0416 16:59:38.999024    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:59:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:59:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:59:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:59:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [257879ecf06b] <==
	I0416 16:57:06.255455       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 16:57:06.278834       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 16:57:06.280824       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 16:57:06.296912       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 16:57:06.297990       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-022600_72f60c68-7530-4649-9313-75b903c805c4!
	I0416 16:57:06.297754       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e843972e-8d36-423a-bd47-42ea404826e6", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-022600_72f60c68-7530-4649-9313-75b903c805c4 became leader
	I0416 16:57:06.399840       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-022600_72f60c68-7530-4649-9313-75b903c805c4!
	

-- /stdout --
** stderr ** 
	W0416 17:00:26.441958    3336 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600: (10.7178992s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-022600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StartCluster (415.16s)

TestMultiControlPlane/serial/DeployApp (752.54s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- rollout status deployment/busybox
E0416 17:01:06.818113    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 17:01:34.656256    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 17:06:06.832966    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
ha_test.go:133: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-022600 -- rollout status deployment/busybox: exit status 1 (10m3.2773063s)

-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 3 updated replicas are available...

-- /stdout --
** stderr ** 
	W0416 17:00:45.658180   11788 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: deployment "busybox" exceeded its progress deadline

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0416 17:10:48.968689    4768 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0416 17:10:50.798056    6628 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0416 17:10:51.919928    3800 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0416 17:10:53.708702   14264 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0416 17:10:55.926356   10108 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0416 17:11:01.736429    8932 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
E0416 17:11:06.848143    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0416 17:11:10.665291    1172 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0416 17:11:25.389498   13168 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0416 17:11:46.013380    7004 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0416 17:12:01.772384    2160 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
E0416 17:12:30.059881    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0416 17:12:42.124307    8152 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:159: failed to resolve pod IPs: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0416 17:12:42.124307    8152 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
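The polling loop above fails because the jsonpath query keeps returning a single pod IP instead of three (two busybox replicas never get scheduled). A minimal sketch of the same count check, using the sample output captured in the log above (the word-count logic here is an illustration, not the test's actual Go code):

```shell
# Sample jsonpath output from the log above: only one pod IP was reported.
out="'10.244.0.4'"

# Strip the quotes and count space-separated IPs; the test expects 3.
count=$(echo "$out" | tr -d "'" | wc -w | tr -d ' ')
echo "got $count pod IP(s), expected 3"
```

A healthy three-replica deployment would yield output like `'10.244.0.4 10.244.1.2 10.244.2.2'`, giving a count of 3; here only the pod on the first control-plane node received an IP.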
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-gph6r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-gph6r -- nslookup kubernetes.io: exit status 1 (349.4019ms)

** stderr ** 
	W0416 17:12:42.815001   10884 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-7fdf7869d9-gph6r does not have a host assigned

** /stderr **
ha_test.go:173: Pod busybox-7fdf7869d9-gph6r could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-mnl84 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-mnl84 -- nslookup kubernetes.io: exit status 1 (356.7135ms)

** stderr ** 
	W0416 17:12:43.180212    9016 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-7fdf7869d9-mnl84 does not have a host assigned

** /stderr **
ha_test.go:173: Pod busybox-7fdf7869d9-mnl84 could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-rpfpf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-rpfpf -- nslookup kubernetes.io: (1.5851711s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-gph6r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-gph6r -- nslookup kubernetes.default: exit status 1 (340.202ms)

** stderr ** 
	W0416 17:12:45.103652    8412 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-7fdf7869d9-gph6r does not have a host assigned

** /stderr **
ha_test.go:183: Pod busybox-7fdf7869d9-gph6r could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-mnl84 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-mnl84 -- nslookup kubernetes.default: exit status 1 (338.2989ms)

** stderr ** 
	W0416 17:12:45.450207   14136 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-7fdf7869d9-mnl84 does not have a host assigned

** /stderr **
ha_test.go:183: Pod busybox-7fdf7869d9-mnl84 could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-rpfpf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-gph6r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-gph6r -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (353.2642ms)

** stderr ** 
	W0416 17:12:46.287557    1832 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-7fdf7869d9-gph6r does not have a host assigned

** /stderr **
ha_test.go:191: Pod busybox-7fdf7869d9-gph6r could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-mnl84 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-mnl84 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (347.9344ms)

** stderr ** 
	W0416 17:12:46.640598   13636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-7fdf7869d9-mnl84 does not have a host assigned

** /stderr **
ha_test.go:191: Pod busybox-7fdf7869d9-mnl84 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-rpfpf -- nslookup kubernetes.default.svc.cluster.local
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600: (11.0985878s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-022600 logs -n 25: (7.3918497s)
helpers_test.go:252: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| delete  | -p functional-538700                 | functional-538700 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:52 UTC | 16 Apr 24 16:53 UTC |
	| start   | -p ha-022600 --wait=true             | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:53 UTC |                     |
	|         | --memory=2200 --ha                   |                   |                   |                |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |                |                     |                     |
	|         | --driver=hyperv                      |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- apply -f             | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:00 UTC | 16 Apr 24 17:00 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- rollout status       | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:00 UTC |                     |
	|         | deployment/busybox                   |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r --          |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 --          |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf --          |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r --          |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 --          |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf --          |                   |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r -- nslookup |                   |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 -- nslookup |                   |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600         | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf -- nslookup |                   |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |                |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:53:50
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:53:50.116950   12816 out.go:291] Setting OutFile to fd 784 ...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.117952   12816 out.go:304] Setting ErrFile to fd 696...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.138920   12816 out.go:298] Setting JSON to false
	I0416 16:53:50.141501   12816 start.go:129] hostinfo: {"hostname":"minikube5","uptime":24059,"bootTime":1713262370,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 16:53:50.141501   12816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 16:53:50.143700   12816 out.go:177] * [ha-022600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 16:53:50.144387   12816 notify.go:220] Checking for updates...
	I0416 16:53:50.144982   12816 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:53:50.145881   12816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:53:50.146373   12816 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 16:53:50.146987   12816 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:53:50.147788   12816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:53:50.149250   12816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:53:54.959514   12816 out.go:177] * Using the hyperv driver based on user configuration
	I0416 16:53:54.959811   12816 start.go:297] selected driver: hyperv
	I0416 16:53:54.959811   12816 start.go:901] validating driver "hyperv" against <nil>
	I0416 16:53:54.959811   12816 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:53:55.002641   12816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:53:55.003374   12816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:53:55.003816   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:53:55.003816   12816 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0416 16:53:55.003816   12816 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0416 16:53:55.003816   12816 start.go:340] cluster config:
	{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:53:55.003816   12816 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:53:55.005700   12816 out.go:177] * Starting "ha-022600" primary control-plane node in "ha-022600" cluster
	I0416 16:53:55.006053   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:53:55.006397   12816 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 16:53:55.006397   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:53:55.006539   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:53:55.006809   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:53:55.007075   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:53:55.007821   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json: {Name:mkc2f9747189bfa0db5ea21e93e1afafc0e89eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:53:55.008149   12816 start.go:360] acquireMachinesLock for ha-022600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:53:55.009151   12816 start.go:364] duration metric: took 1.0024ms to acquireMachinesLock for "ha-022600"
	I0416 16:53:55.009151   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:53:55.009151   12816 start.go:125] createHost starting for "" (driver="hyperv")
	I0416 16:53:55.010175   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:53:55.010397   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:53:55.010397   12816 client.go:168] LocalClient.Create starting
	I0416 16:53:55.010740   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011200   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011541   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:56.853713   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:58.347399   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:59.667644   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:02.791736   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:54:03.131710   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: Creating VM...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:05.824937   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:54:05.825022   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:54:07.398351   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:54:07.398635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:07.398635   12816 main.go:141] libmachine: Creating VHD
	I0416 16:54:07.398733   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:54:10.982944   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E9EB5342-E929-43B6-8B97-D7BDD354CEE1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:54:10.983213   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:54:10.992883   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stderr =====>] : 
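	The "Writing magic tar header" / "Writing SSH key tar header" steps above are libmachine embedding a small tar archive (carrying the generated SSH key) at the front of the raw fixed VHD before it is converted to dynamic, so the guest can pick the key up on first boot. A minimal sketch of that splice, using a scratch file and made-up paths rather than a real VHD:

```shell
# Rehearse the tar-at-offset-zero trick on a throwaway raw disk image.
WORK=$(mktemp -d)
echo "fake-ssh-key" > "$WORK/id_rsa"            # stand-in for the generated key
tar -C "$WORK" -cf "$WORK/keys.tar" id_rsa      # archive it
truncate -s 1M "$WORK/disk.raw"                 # zero-filled "disk"
# Splice the tar into the head of the disk without truncating it.
dd if="$WORK/keys.tar" of="$WORK/disk.raw" conv=notrunc 2>/dev/null
# The disk image now lists as a tar archive (zero padding reads as end-of-archive).
tar -tf "$WORK/disk.raw" 2>/dev/null | head -1
```

Listing the raw image with `tar -tf` shows the embedded `id_rsa`, which is exactly what the guest-side extraction relies on.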
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -SizeBytes 20000MB
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-022600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600 -DynamicMemoryEnabled $false
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:21.397696   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600 -Count 2
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:23.302296   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\boot2docker.iso'
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:25.541060   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd'
	I0416 16:54:27.919093   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: Starting VM...
	I0416 16:54:27.919462   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600
	I0416 16:54:30.480037   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:32.483346   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:34.785082   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:34.785271   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:35.799483   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:37.788898   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:40.058231   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:40.058742   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:41.064074   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:45.301253   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:45.301420   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:46.309647   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:50.614494   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:50.615195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:51.620909   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:53.639317   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:53.640351   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:53.640405   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:55.942630   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:54:55.943393   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:55.943471   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stderr =====>] : 
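	The alternating `Get-VM ... .state` / `.ipaddresses[0]` queries above form a poll loop: libmachine retries until the adapter reports an address (empty stdout on each miss, then `172.19.81.207`). A hedged sketch of the same retry pattern, with a stub probe standing in for the PowerShell call:

```shell
# Retry-until-ready, as in the log: keep asking for the VM's IP until
# the probe returns one. get_ip is a stand-in for the real
# (Get-VM).networkadapters[0].ipaddresses[0] query, not minikube code.
get_ip() {
  if [ "$TRIES" -ge 3 ]; then
    echo "172.19.81.207"   # adapter finally reports an address
  fi                       # earlier attempts: empty output, like the blank stdout above
}
TRIES=0
IP=""
while [ -z "$IP" ]; do
  IP=$(get_ip)
  TRIES=$((TRIES + 1))
  # the real loop sleeps ~1s between attempts; omitted here
done
echo "Got $IP after $TRIES attempts"
```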
	I0416 16:54:57.837395   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:54:57.837474   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:59.762683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:59.763360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:59.763440   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:02.010689   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:02.023158   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:02.023158   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:55:02.152140   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:55:02.152244   12816 buildroot.go:166] provisioning hostname "ha-022600"
	I0416 16:55:02.152322   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:03.957618   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:06.309822   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:06.310484   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:06.310484   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600 && echo "ha-022600" | sudo tee /etc/hostname
	I0416 16:55:06.479074   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600
	
	I0416 16:55:06.479182   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:08.433073   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:10.796713   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:10.797321   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:10.797321   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:55:10.944702   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:55:10.944870   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:55:10.944983   12816 buildroot.go:174] setting up certificates
	I0416 16:55:10.944983   12816 provision.go:84] configureAuth start
	I0416 16:55:10.945092   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:12.933614   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:17.088334   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:19.325791   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:19.326294   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:19.326294   12816 provision.go:143] copyHostCerts
	I0416 16:55:19.326294   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:55:19.326294   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:55:19.326294   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:55:19.326900   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:55:19.328097   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:55:19.328097   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:55:19.329417   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:55:19.329417   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:55:19.329417   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:55:19.330063   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:55:19.330726   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600 san=[127.0.0.1 172.19.81.207 ha-022600 localhost minikube]
	I0416 16:55:19.539117   12816 provision.go:177] copyRemoteCerts
	I0416 16:55:19.547114   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:55:19.547114   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:23.727019   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:23.834423   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.287066s)
	I0416 16:55:23.834577   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:55:23.835008   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:55:23.874966   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:55:23.875470   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0416 16:55:23.923921   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:55:23.923921   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:55:23.965042   12816 provision.go:87] duration metric: took 13.0192422s to configureAuth
	I0416 16:55:23.965042   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:55:23.965741   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:55:23.965827   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:25.905339   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:25.905903   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:25.905986   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:28.170079   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:28.170419   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:28.173356   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:28.173937   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:28.173937   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:55:28.301727   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:55:28.301727   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:55:28.302425   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:55:28.302506   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:30.181889   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:32.398667   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:32.399299   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:32.399475   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:55:32.556658   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:55:32.556887   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:34.446928   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:34.446969   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:34.447053   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:36.709442   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:36.710242   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:36.714111   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:36.714437   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:36.714437   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:55:38.655929   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 16:55:38.655929   12816 machine.go:97] duration metric: took 40.8162201s to provisionDockerMachine
	I0416 16:55:38.656036   12816 client.go:171] duration metric: took 1m43.6397622s to LocalClient.Create
	I0416 16:55:38.656036   12816 start.go:167] duration metric: took 1m43.6397622s to libmachine.API.Create "ha-022600"
	I0416 16:55:38.656036   12816 start.go:293] postStartSetup for "ha-022600" (driver="hyperv")
	I0416 16:55:38.656036   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:55:38.665072   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:55:38.665072   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:40.515910   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:42.764754   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:42.765404   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:42.765404   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:42.879399   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2140881s)
	I0416 16:55:42.892410   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:55:42.899117   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:55:42.899117   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:55:42.899734   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:55:42.901086   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:55:42.901154   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:55:42.911237   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:55:42.927664   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:55:42.975440   12816 start.go:296] duration metric: took 4.3191592s for postStartSetup
	I0416 16:55:42.977201   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:44.831562   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:47.134349   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:47.134788   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:47.135000   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:55:47.137270   12816 start.go:128] duration metric: took 1m52.1217609s to createHost
	I0416 16:55:47.137270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:49.024657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:51.238446   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:51.238526   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:51.242455   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:51.243052   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:51.243052   12816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:55:51.369469   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286551.540248133
	
	I0416 16:55:51.369575   12816 fix.go:216] guest clock: 1713286551.540248133
	I0416 16:55:51.369575   12816 fix.go:229] Guest: 2024-04-16 16:55:51.540248133 +0000 UTC Remote: 2024-04-16 16:55:47.1372703 +0000 UTC m=+117.146546101 (delta=4.402977833s)
	I0416 16:55:51.369790   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:53.407581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:55.667543   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:55.667688   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:55.667688   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286551
	I0416 16:55:55.810591   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:55:51 UTC 2024
	
	I0416 16:55:55.810700   12816 fix.go:236] clock set: Tue Apr 16 16:55:51 UTC 2024
	 (err=<nil>)
	I0416 16:55:55.810700   12816 start.go:83] releasing machines lock for "ha-022600", held for 2m0.7946995s
	I0416 16:55:55.810965   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:57.711672   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:59.985139   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:59.985210   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:59.988730   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:55:59.988803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:59.998550   12816 ssh_runner.go:195] Run: cat /version.json
	I0416 16:55:59.998550   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:01.995788   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.995959   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.996084   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:04.379274   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.379356   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.379701   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.391360   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.392161   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.392520   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.469159   12816 ssh_runner.go:235] Completed: cat /version.json: (4.4703555s)
	I0416 16:56:04.479363   12816 ssh_runner.go:195] Run: systemctl --version
	I0416 16:56:04.584079   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5950892s)
	I0416 16:56:04.593130   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:56:04.602217   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:56:04.610705   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:56:04.639084   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:56:04.639119   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:04.639119   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:04.684127   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:56:04.713899   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:56:04.734297   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:56:04.745020   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:56:04.776657   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.806087   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:56:04.854166   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.890388   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:56:04.918140   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:56:04.946595   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:56:04.975408   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:56:05.001633   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:56:05.028505   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:56:05.053299   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:05.230466   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:56:05.260161   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:05.269988   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:56:05.302694   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.335619   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:56:05.368663   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.402792   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.435612   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:56:05.483431   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.505797   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:05.548843   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:56:05.563980   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:56:05.582552   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:56:05.624048   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:56:05.804495   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:56:05.984936   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:56:05.985183   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:56:06.032244   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:06.217075   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:08.662995   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4457805s)
	I0416 16:56:08.670977   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 16:56:08.701542   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:08.730698   12816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 16:56:08.941813   12816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 16:56:09.145939   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.331138   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 16:56:09.370232   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:09.409657   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.615575   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 16:56:09.726879   12816 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 16:56:09.737760   12816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 16:56:09.746450   12816 start.go:562] Will wait 60s for crictl version
	I0416 16:56:09.755840   12816 ssh_runner.go:195] Run: which crictl
	I0416 16:56:09.771470   12816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:56:09.827603   12816 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 16:56:09.836477   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.874967   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.907967   12816 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 16:56:09.908249   12816 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: 172.19.80.1/20
	I0416 16:56:09.924842   12816 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 16:56:09.931830   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:56:09.968931   12816 kubeadm.go:877] updating cluster {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:56:09.968931   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:56:09.975955   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:09.996899   12816 docker.go:685] Got preloaded images: 
	I0416 16:56:09.996899   12816 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0416 16:56:10.008276   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:10.035609   12816 ssh_runner.go:195] Run: which lz4
	I0416 16:56:10.042582   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0416 16:56:10.050849   12816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0416 16:56:10.058074   12816 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 16:56:10.058074   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0416 16:56:11.721910   12816 docker.go:649] duration metric: took 1.6789563s to copy over tarball
	I0416 16:56:11.731181   12816 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 16:56:20.333529   12816 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.60186s)
	I0416 16:56:20.333529   12816 ssh_runner.go:146] rm: /preloaded.tar.lz4
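The preload flow above is: scp the cached images tarball to the guest, extract it into /var with xattrs preserved, then delete the tarball. A standalone sketch of that copy/extract/remove pattern, using a throwaway gzip tarball in a temp dir (the real flow uses lz4 compression, `--xattrs --xattrs-include security.capability`, and `/preloaded.tar.lz4`; those specifics are assumed unavailable here):

```shell
set -eu
work=$(mktemp -d)

# Stand-in for the cached preload content (real payload is docker image layers).
mkdir -p "$work/src/lib/docker"
echo hello > "$work/src/lib/docker/layer.txt"

# Pack it, as the host-side cache would be packed (gzip here, lz4 in minikube).
tar -C "$work/src" -czf "$work/preloaded.tar.gz" .

# Guest side: extract into the target root, then remove the tarball,
# mirroring the "tar ... -C /var -xf /preloaded.tar.lz4" + rm steps above.
mkdir -p "$work/var"
tar -C "$work/var" -xzf "$work/preloaded.tar.gz"
rm "$work/preloaded.tar.gz"

cat "$work/var/lib/docker/layer.txt"   # -> hello
```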
	I0416 16:56:20.400516   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:20.419486   12816 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0416 16:56:20.469018   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:20.655543   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:23.229259   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5734984s)
	I0416 16:56:23.240705   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:23.262332   12816 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0416 16:56:23.262383   12816 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:56:23.262383   12816 kubeadm.go:928] updating node { 172.19.81.207 8443 v1.29.3 docker true true} ...
	I0416 16:56:23.262383   12816 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-022600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.81.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:56:23.270008   12816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 16:56:23.307277   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:23.307277   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:23.307362   12816 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:56:23.307406   12816 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.81.207 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-022600 NodeName:ha-022600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.81.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.81.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:56:23.307691   12816 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.81.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-022600"
	  kubeletExtraArgs:
	    node-ip: 172.19.81.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.81.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 16:56:23.307749   12816 kube-vip.go:111] generating kube-vip config ...
	I0416 16:56:23.318492   12816 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:56:23.343950   12816 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:56:23.344258   12816 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable

	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0416 16:56:23.353585   12816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:56:23.370542   12816 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:56:23.379813   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0416 16:56:23.397865   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0416 16:56:23.432291   12816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:56:23.462868   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0416 16:56:23.492579   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0416 16:56:23.534977   12816 ssh_runner.go:195] Run: grep 172.19.95.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:56:23.542734   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
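The two `/etc/hosts` updates above (`host.minikube.internal`, `control-plane.minikube.internal`) use a filter-then-append pattern so the entry stays unique across repeated runs: strip any stale line for the name, append the fresh one, then copy the result back. A standalone sketch against a temp file (paths and addresses here are illustrative):

```shell
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.19.80.1\thost.minikube.internal\n' > "$hosts"

# Drop any existing entry for the name, append the current one,
# then replace the file -- same shape as the bash -c command in the log.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.19.80.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"

grep -c 'host.minikube.internal' "$hosts"   # -> 1 (no duplicate entries)
```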
	I0416 16:56:23.575719   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:23.754395   12816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:56:23.781462   12816 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600 for IP: 172.19.81.207
	I0416 16:56:23.781462   12816 certs.go:194] generating shared ca certs ...
	I0416 16:56:23.781462   12816 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 16:56:23.783651   12816 certs.go:256] generating profile certs ...
	I0416 16:56:23.784402   12816 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key
	I0416 16:56:23.784569   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt with IP's: []
	I0416 16:56:23.984047   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt ...
	I0416 16:56:23.984047   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt: {Name:mk3ebdcb7f076a09a399313f7ed3edf14403a6a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.985977   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key ...
	I0416 16:56:23.985977   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key: {Name:mk94343a485b04f4b25a0ccd3245e197e7ecbec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.986215   12816 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648
	I0416 16:56:23.987265   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.81.207 172.19.95.254]
	I0416 16:56:24.317716   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 ...
	I0416 16:56:24.317716   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648: {Name:mk30f7000427979a1bcf8d6fc3995d1f7ccc170c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.319797   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 ...
	I0416 16:56:24.319797   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648: {Name:mk95e9e3e0f96031ef005f6c36470c216303a111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.320163   12816 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt
	I0416 16:56:24.331288   12816 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key
	I0416 16:56:24.332214   12816 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key
	I0416 16:56:24.332214   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt with IP's: []
	I0416 16:56:24.406574   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt ...
	I0416 16:56:24.406574   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt: {Name:mk73158a02cd8861e90a3b76d50704b360c358ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.407013   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key ...
	I0416 16:56:24.407013   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key: {Name:mk6842e2af8fadaf278ec7592edd5bd96f07c8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.408078   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:56:24.408945   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:56:24.409732   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:56:24.417870   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:56:24.418145   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 16:56:24.418533   12816 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 16:56:24.418533   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 16:56:24.418811   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 16:56:24.418990   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 16:56:24.419161   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 16:56:24.419368   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 16:56:24.419647   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 16:56:24.419767   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:24.419867   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 16:56:24.420003   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:56:24.466985   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:56:24.509816   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:56:24.554817   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:56:24.603006   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 16:56:24.646596   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 16:56:24.694120   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:56:24.741669   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 16:56:24.785888   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 16:56:24.829403   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:56:24.891821   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 16:56:24.933883   12816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 16:56:24.975091   12816 ssh_runner.go:195] Run: openssl version
	I0416 16:56:24.994129   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 16:56:25.021821   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.028512   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.037989   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.054924   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:56:25.080011   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:56:25.106815   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.113980   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.126339   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.144599   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:56:25.170309   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 16:56:25.199080   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.206080   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.214031   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.237026   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
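The `openssl x509 -hash -noout` calls above compute the subject-name hash that names the `/etc/ssl/certs/<hash>.0` symlinks created right after them — this is the standard OpenSSL trust-store lookup convention. A standalone sketch using a throwaway self-signed cert in a temp dir (not minikube's real CA; all paths illustrative):

```shell
set -eu
dir=$(mktemp -d)

# Throwaway self-signed cert standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null

# Subject-name hash, e.g. "b5213941" for minikube's CA in the log above.
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")

# OpenSSL resolves trust by this symlink name: <subject-hash>.0
ln -fs "$dir/ca.pem" "$dir/$hash.0"

openssl x509 -noout -subject -in "$dir/$hash.0"
```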
	I0416 16:56:25.266837   12816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:56:25.273408   12816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:56:25.273858   12816 kubeadm.go:391] StartCluster: {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clu
sterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:56:25.281991   12816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 16:56:25.314891   12816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 16:56:25.342248   12816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 16:56:25.368032   12816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 16:56:25.385737   12816 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 16:56:25.385737   12816 kubeadm.go:156] found existing configuration files:
	
	I0416 16:56:25.393851   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 16:56:25.410393   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 16:56:25.421874   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 16:56:25.453762   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 16:56:25.468769   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 16:56:25.477353   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 16:56:25.501898   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.515888   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 16:56:25.524885   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.548518   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 16:56:25.563660   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 16:56:25.572269   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 16:56:25.587981   12816 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 16:56:25.791977   12816 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 16:56:25.791977   12816 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 16:56:25.958638   12816 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 16:56:25.959035   12816 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 16:56:25.959403   12816 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 16:56:26.228464   12816 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 16:56:26.229544   12816 out.go:204]   - Generating certificates and keys ...
	I0416 16:56:26.229862   12816 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 16:56:26.230882   12816 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 16:56:26.359024   12816 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 16:56:26.583044   12816 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 16:56:26.715543   12816 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 16:56:27.014892   12816 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 16:56:27.414264   12816 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 16:56:27.414467   12816 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.642396   12816 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 16:56:27.642770   12816 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.844566   12816 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 16:56:28.089475   12816 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 16:56:28.543900   12816 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 16:56:28.548586   12816 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 16:56:29.051829   12816 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 16:56:29.485679   12816 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 16:56:29.830737   12816 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 16:56:30.055972   12816 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 16:56:30.310446   12816 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 16:56:30.311113   12816 out.go:204]   - Booting up control plane ...
	I0416 16:56:30.311289   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 16:56:30.311970   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 16:56:30.317049   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 16:56:30.342443   12816 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 16:56:30.526725   12816 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 16:56:37.142045   12816 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.615653 seconds
	I0416 16:56:37.159025   12816 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 16:56:37.175108   12816 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 16:56:37.707867   12816 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 16:56:37.708715   12816 kubeadm.go:309] [mark-control-plane] Marking the node ha-022600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 16:56:38.222729   12816 kubeadm.go:309] [bootstrap-token] Using token: a3r5qn.ikva200bfcppykg5
	I0416 16:56:38.223819   12816 out.go:204]   - Configuring RBAC rules ...
	I0416 16:56:38.224231   12816 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 16:56:38.232416   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 16:56:38.244982   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 16:56:38.249926   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 16:56:38.257723   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 16:56:38.262029   12816 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 16:56:38.279883   12816 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 16:56:38.592701   12816 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 16:56:38.638273   12816 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 16:56:38.639572   12816 kubeadm.go:309] 
	I0416 16:56:38.640154   12816 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 16:56:38.640230   12816 kubeadm.go:309] 
	I0416 16:56:38.640982   12816 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 16:56:38.641038   12816 kubeadm.go:309] 
	I0416 16:56:38.641299   12816 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 16:56:38.641581   12816 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309] 
	I0416 16:56:38.641989   12816 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 16:56:38.642031   12816 kubeadm.go:309] 
	I0416 16:56:38.642184   12816 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 16:56:38.642228   12816 kubeadm.go:309] 
	I0416 16:56:38.642350   12816 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 16:56:38.642660   12816 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 16:56:38.642862   12816 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 16:56:38.642900   12816 kubeadm.go:309] 
	I0416 16:56:38.643166   12816 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 16:56:38.643426   12816 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 16:56:38.643426   12816 kubeadm.go:309] 
	I0416 16:56:38.643613   12816 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.643867   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 16:56:38.643909   12816 kubeadm.go:309] 	--control-plane 
	I0416 16:56:38.643961   12816 kubeadm.go:309] 
	I0416 16:56:38.644233   12816 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 16:56:38.644272   12816 kubeadm.go:309] 
	I0416 16:56:38.644444   12816 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.644734   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 16:56:38.647455   12816 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 16:56:38.647488   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:38.647539   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:38.648246   12816 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 16:56:38.657141   12816 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 16:56:38.671263   12816 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 16:56:38.671263   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 16:56:38.722410   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 16:56:39.265655   12816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-022600 minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=ha-022600 minikube.k8s.io/primary=true
	I0416 16:56:39.290244   12816 ops.go:34] apiserver oom_adj: -16
	I0416 16:56:39.441163   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.950155   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.453751   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.955147   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.455931   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.953044   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.454696   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.949299   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.454962   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.953447   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.456402   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.956686   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.449476   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.951602   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.451988   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.949212   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.449356   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.950703   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.458777   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.956811   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.456669   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.943595   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.443906   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.950503   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.454863   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.944285   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:52.083562   12816 kubeadm.go:1107] duration metric: took 12.8170858s to wait for elevateKubeSystemPrivileges
	W0416 16:56:52.083816   12816 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 16:56:52.083816   12816 kubeadm.go:393] duration metric: took 26.808438s to StartCluster
	I0416 16:56:52.083816   12816 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.084214   12816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:52.086643   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.088384   12816 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:56:52.088384   12816 start.go:240] waiting for startup goroutines ...
	I0416 16:56:52.088384   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 16:56:52.088384   12816 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 16:56:52.088630   12816 addons.go:69] Setting storage-provisioner=true in profile "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:234] Setting addon storage-provisioner=true in "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:69] Setting default-storageclass=true in profile "ha-022600"
	I0416 16:56:52.088850   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:52.088964   12816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-022600"
	I0416 16:56:52.088964   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:56:52.090289   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.090671   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.207597   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 16:56:52.469504   12816 start.go:946] {"host.minikube.internal": 172.19.80.1} host record injected into CoreDNS's ConfigMap
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.165583   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.165635   12816 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:54.165635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.166734   12816 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:56:54.166340   12816 kapi.go:59] client config for ha-022600: &rest.Config{Host:"https://172.19.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 16:56:54.167133   12816 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:56:54.167133   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 16:56:54.167133   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:54.167791   12816 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 16:56:54.168180   12816 addons.go:234] Setting addon default-storageclass=true in "ha-022600"
	I0416 16:56:54.168347   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:54.169251   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:56.312581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.312988   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313046   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313270   12816 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 16:56:56.313270   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.330966   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:58.735727   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:58.735876   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.736103   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:58.898469   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:00.676245   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:57:00.828151   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 16:57:01.248041   12816 round_trippers.go:463] GET https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0416 16:57:01.248041   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.248041   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.248041   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.261890   12816 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0416 16:57:01.262478   12816 round_trippers.go:463] PUT https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0416 16:57:01.262478   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Content-Type: application/json
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.262478   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.268964   12816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:57:01.269995   12816 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 16:57:01.270495   12816 addons.go:505] duration metric: took 9.181591s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 16:57:01.270576   12816 start.go:245] waiting for cluster config update ...
	I0416 16:57:01.270618   12816 start.go:254] writing updated cluster config ...
	I0416 16:57:01.271859   12816 out.go:177] 
	I0416 16:57:01.284169   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:57:01.284169   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.285951   12816 out.go:177] * Starting "ha-022600-m02" control-plane node in "ha-022600" cluster
	I0416 16:57:01.286952   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:57:01.286952   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:57:01.286952   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:57:01.286952   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:57:01.286952   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.296247   12816 start.go:360] acquireMachinesLock for ha-022600-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:57:01.297324   12816 start.go:364] duration metric: took 1.0773ms to acquireMachinesLock for "ha-022600-m02"
	I0416 16:57:01.297559   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:57:01.297559   12816 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0416 16:57:01.297559   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:57:01.297559   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:57:01.297559   12816 client.go:168] LocalClient.Create starting
	I0416 16:57:01.298838   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299293   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:57:03.017072   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:57:03.017279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:03.017366   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:09.316740   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:57:09.669552   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: Creating VM...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:12.690107   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:57:12.690185   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:14.267157   12816 main.go:141] libmachine: Creating VHD
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : FE960248-03C1-43D6-B7AE-A60D4C86308B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:57:17.758158   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:57:20.709379   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:20.709950   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:20.710019   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -SizeBytes 20000MB
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-022600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600-m02 -DynamicMemoryEnabled $false
	I0416 16:57:28.159153   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:28.159229   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:28.159409   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600-m02 -Count 2
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\boot2docker.iso'
	I0416 16:57:32.420739   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:32.421735   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:32.421878   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd'
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: Starting VM...
	I0416 16:57:34.780971   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600-m02
	I0416 16:57:37.369505   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:57:37.369767   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:39.415286   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:42.700464   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:48.000886   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:49.992438   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:49.992894   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:49.992930   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:53.290891   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:55.287716   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:55.287962   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:55.288037   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:58.572803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:02.905391   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stderr =====>] : 
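	The repeated `Get-VM ... .state` / `ipaddresses[0]` queries above follow a plain poll-with-backoff pattern: Hyper-V reports the VM as Running before the guest has a DHCP lease, so the IP query is retried with a pause until it returns an address. A minimal Go sketch of that pattern, assuming a caller-supplied `getIP` closure in place of the actual PowerShell invocation (`waitForIP` and its parameters are hypothetical names, not minikube's API):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls getIP until it returns a non-empty address or attempts
// run out. getIP stands in for executing
//   (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]
// via powershell.exe; an empty result means the adapter has no lease yet.
func waitForIP(getIP func() (string, error), attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		ip, err := getIP()
		if err != nil {
			return "", err
		}
		if ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", errors.New("timed out waiting for VM IP")
}

func main() {
	// Simulate an adapter that reports no address for the first few polls,
	// as in the log: several empty stdout lines, then 172.19.80.125.
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", nil
		}
		return "172.19.80.125", nil
	}, 10, time.Millisecond)
	fmt.Println(ip, err)
}
```

	In the log, each empty stdout block is followed by roughly a one-second pause before the next `Get-VM` state query, matching the delay parameter here.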
	I0416 16:58:04.899479   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:58:04.899479   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:06.914869   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:09.273511   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:09.273546   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:09.277783   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:09.278406   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:09.278406   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:58:09.413281   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:58:09.413281   12816 buildroot.go:166] provisioning hostname "ha-022600-m02"
	I0416 16:58:09.413281   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:11.439079   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:13.805295   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:13.805684   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:13.805684   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600-m02 && echo "ha-022600-m02" | sudo tee /etc/hostname
	I0416 16:58:13.957933   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600-m02
	
	I0416 16:58:13.958021   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:18.176996   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:18.178002   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:18.182057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:18.182681   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:18.182681   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:58:18.315751   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:58:18.315853   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:58:18.315853   12816 buildroot.go:174] setting up certificates
	I0416 16:58:18.315853   12816 provision.go:84] configureAuth start
	I0416 16:58:18.315853   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:20.243862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:22.525833   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:22.525945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:22.526057   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:24.418894   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:26.735560   12816 provision.go:143] copyHostCerts
	I0416 16:58:26.736546   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:58:26.736627   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:58:26.737290   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:58:26.737900   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:58:26.737900   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:58:26.738191   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:58:26.738908   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:58:26.738977   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:58:26.739652   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600-m02 san=[127.0.0.1 172.19.80.125 ha-022600-m02 localhost minikube]
	I0416 16:58:26.917277   12816 provision.go:177] copyRemoteCerts
	I0416 16:58:26.926308   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:58:26.926600   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:28.830343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:31.113681   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:31.229222   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3026703s)
	I0416 16:58:31.229222   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:58:31.229700   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:58:31.279666   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:58:31.280307   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:58:31.328101   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:58:31.328245   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:58:31.382563   12816 provision.go:87] duration metric: took 13.065969s to configureAuth
	I0416 16:58:31.382563   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:58:31.383343   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:58:31.383343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:33.331275   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:35.653673   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:35.653721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:35.656855   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:35.657430   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:35.657430   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:58:35.781565   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:58:35.781565   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:58:35.781565   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:58:35.782090   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:37.696344   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:39.961057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:39.961515   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:39.961564   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.81.207"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:58:40.123664   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.81.207
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:58:40.123818   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:42.064878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:42.064974   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:42.065152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:44.330103   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:44.330731   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:44.330731   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:58:46.283136   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 16:58:46.283253   12816 machine.go:97] duration metric: took 41.3814279s to provisionDockerMachine
	I0416 16:58:46.283253   12816 client.go:171] duration metric: took 1m44.9797412s to LocalClient.Create
	I0416 16:58:46.283253   12816 start.go:167] duration metric: took 1m44.9797412s to libmachine.API.Create "ha-022600"
	I0416 16:58:46.283253   12816 start.go:293] postStartSetup for "ha-022600-m02" (driver="hyperv")
	I0416 16:58:46.283345   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:58:46.292724   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:58:46.292724   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:50.480821   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:50.575284   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2823171s)
	I0416 16:58:50.584260   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:58:50.591292   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:58:50.591900   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:58:50.591900   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:58:50.601073   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:58:50.618807   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:58:50.671301   12816 start.go:296] duration metric: took 4.3877068s for postStartSetup
	I0416 16:58:50.673161   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:52.621684   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:54.923763   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:58:54.926483   12816 start.go:128] duration metric: took 1m53.622481s to createHost
	I0416 16:58:54.926657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:56.793184   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:59.024255   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:59.025184   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:59.029108   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:59.029633   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:59.029730   12816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:58:59.149333   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286739.315259098
	
	I0416 16:58:59.149333   12816 fix.go:216] guest clock: 1713286739.315259098
	I0416 16:58:59.149333   12816 fix.go:229] Guest: 2024-04-16 16:58:59.315259098 +0000 UTC Remote: 2024-04-16 16:58:54.9265716 +0000 UTC m=+304.925199701 (delta=4.388687498s)
	I0416 16:58:59.149333   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:01.054656   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:03.307071   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:59:03.307459   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:59:03.307531   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286739
	I0416 16:59:03.449024   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:58:59 UTC 2024
	
	I0416 16:59:03.449024   12816 fix.go:236] clock set: Tue Apr 16 16:58:59 UTC 2024
	 (err=<nil>)
	I0416 16:59:03.449024   12816 start.go:83] releasing machines lock for "ha-022600-m02", held for 2m2.1447745s
	I0416 16:59:03.450039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:07.739042   12816 out.go:177] * Found network options:
	I0416 16:59:07.739784   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.740404   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.741027   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.741505   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 16:59:07.742708   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.744988   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:59:07.745153   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:07.752817   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 16:59:07.752817   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:09.758953   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:12.157582   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.158536   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.159044   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.185179   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.257436   12816 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5043642s)
	W0416 16:59:12.257436   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:59:12.266545   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:59:12.367206   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:59:12.367296   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:12.367330   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6219642s)
	I0416 16:59:12.367330   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:12.423201   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:59:12.453988   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:59:12.472992   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:59:12.482991   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:59:12.510864   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.538866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:59:12.565866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.597751   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:59:12.622761   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:59:12.648905   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:59:12.674904   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:59:12.713452   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:59:12.741495   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:59:12.768497   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:12.975524   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:59:13.011635   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:13.023647   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:59:13.058146   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.091991   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:59:13.139058   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.173081   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.208242   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:59:13.259511   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.282094   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:13.329081   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:59:13.344832   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:59:13.362131   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:59:13.403377   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:59:13.597444   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:59:13.768147   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:59:13.768278   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:59:13.808294   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:13.987216   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:00:15.104612   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1138396s)
	I0416 17:00:15.115049   12816 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 17:00:15.145752   12816 out.go:177] 
	W0416 17:00:15.146473   12816 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 16:58:45 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.076842920Z" level=info msg="Starting up"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.077687177Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.078706068Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.109665355Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138411128Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138448735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138508447Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138523049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138600164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138632670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138848110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138955930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139030244Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139045347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139142365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139433520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142495192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142588309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142778845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142795748Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143044695Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143174419Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143191422Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.152862930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153144583Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153313214Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153337519Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153354522Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153467543Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153957434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154159572Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154195179Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154212082Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154230586Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154258491Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154272393Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154287696Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154303599Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154317302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154330504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154344107Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154373612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154392516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154406618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154421121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154434024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154447526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154460128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154474031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154498536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154514539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154525841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154555046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154568249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154583952Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154604755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154629960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154642062Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154700973Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154916114Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155014532Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155030135Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155203567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155302486Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155325090Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155706861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155796078Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155907599Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155947306Z" level=info msg="containerd successfully booted in 0.047582s"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.119001526Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.129323458Z" level=info msg="Loading containers: start."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.358382320Z" level=info msg="Loading containers: done."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377033580Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377149301Z" level=info msg="Daemon has completed initialization"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.447556885Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 16:58:46 ha-022600-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.449134569Z" level=info msg="API listen on [::]:2376"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.178053148Z" level=info msg="Processing signal 'terminated'"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.179830517Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 16:59:14 ha-022600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.180814055Z" level=info msg="Daemon shutdown complete"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181020363Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181054564Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:59:15 ha-022600-m02 dockerd[1019]: time="2024-04-16T16:59:15.248212596Z" level=info msg="Starting up"
	Apr 16 17:00:15 ha-022600-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0416 17:00:15.146611   12816 out.go:239] * 
	W0416 17:00:15.147806   12816 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 17:00:15.148383   12816 out.go:177] 
	
	
	==> Docker <==
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.920921751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:57:05 ha-022600 dockerd[1331]: time="2024-04-16T16:57:05.921347872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:57:06 ha-022600 dockerd[1331]: time="2024-04-16T16:57:06.015075707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 16:57:06 ha-022600 dockerd[1331]: time="2024-04-16T16:57:06.015236816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 16:57:06 ha-022600 dockerd[1331]: time="2024-04-16T16:57:06.015340122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 16:57:06 ha-022600 dockerd[1331]: time="2024-04-16T16:57:06.015511532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:00:33 ha-022600 dockerd[1325]: 2024/04/16 17:00:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:00:33 ha-022600 dockerd[1325]: 2024/04/16 17:00:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:00:33 ha-022600 dockerd[1325]: 2024/04/16 17:00:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:00:33 ha-022600 dockerd[1325]: 2024/04/16 17:00:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:00:33 ha-022600 dockerd[1325]: 2024/04/16 17:00:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:00:33 ha-022600 dockerd[1325]: 2024/04/16 17:00:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:00:33 ha-022600 dockerd[1325]: 2024/04/16 17:00:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:00:33 ha-022600 dockerd[1325]: 2024/04/16 17:00:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:00:33 ha-022600 dockerd[1325]: 2024/04/16 17:00:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:00:46 ha-022600 dockerd[1331]: time="2024-04-16T17:00:46.155449869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 17:00:46 ha-022600 dockerd[1331]: time="2024-04-16T17:00:46.155537174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 17:00:46 ha-022600 dockerd[1331]: time="2024-04-16T17:00:46.155576176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:00:46 ha-022600 dockerd[1331]: time="2024-04-16T17:00:46.156267612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:00:46 ha-022600 cri-dockerd[1232]: time="2024-04-16T17:00:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8a4de3aa24af1283627968e3b5972a40e7430994e81c6c1dc2f08b918b9b3ce1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 16 17:00:47 ha-022600 cri-dockerd[1232]: time="2024-04-16T17:00:47Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 16 17:00:47 ha-022600 dockerd[1331]: time="2024-04-16T17:00:47.477180079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 17:00:47 ha-022600 dockerd[1331]: time="2024-04-16T17:00:47.477294385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 17:00:47 ha-022600 dockerd[1331]: time="2024-04-16T17:00:47.477311186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:00:47 ha-022600 dockerd[1331]: time="2024-04-16T17:00:47.478281439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d38b1a5f4caa8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   8a4de3aa24af1       busybox-7fdf7869d9-rpfpf
	3fe545bfad4e6       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   093278b3840ef       coredns-76f75df574-qm89x
	979dee88be2b4       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   4ad38b0d59335       coredns-76f75df574-ww2r6
	257879ecf06b2       6e38f40d628db                                                                                         16 minutes ago      Running             storage-provisioner       0                   bf991c3e34e2d       storage-provisioner
	be245de9ef545       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              16 minutes ago      Running             kindnet-cni               0                   92c35b3fd0967       kindnet-mwqvl
	05db92f49e0df       a1d263b5dc5b0                                                                                         16 minutes ago      Running             kube-proxy                0                   12380f49c1509       kube-proxy-2vddt
	d1ba82cd26254       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     16 minutes ago      Running             kube-vip                  0                   fa2c75c4c8d59       kube-vip-ha-022600
	a7fb69539df62       6052a25da3f97                                                                                         16 minutes ago      Running             kube-controller-manager   0                   b536621e20d4b       kube-controller-manager-ha-022600
	4fd5df8c9fd37       39f995c9f1996                                                                                         16 minutes ago      Running             kube-apiserver            0                   5a7a1e80caeb4       kube-apiserver-ha-022600
	e042d71e8b0e8       8c390d98f50c0                                                                                         16 minutes ago      Running             kube-scheduler            0                   5a2551c91a1b6       kube-scheduler-ha-022600
	c29b0762ff0bf       3861cfcd7c04c                                                                                         16 minutes ago      Running             etcd                      0                   c8a9aa3126cf5       etcd-ha-022600
	
	
	==> coredns [3fe545bfad4e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55981 - 4765 "HINFO IN 3735046377920793891.8143170502200932773. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058936595s
	[INFO] 10.244.0.4:43350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000388921s
	[INFO] 10.244.0.4:35317 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.052221997s
	[INFO] 10.244.0.4:52074 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.040069369s
	[INFO] 10.244.0.4:49068 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.053312593s
	[INFO] 10.244.0.4:54711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123507s
	[INFO] 10.244.0.4:44694 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037006811s
	[INFO] 10.244.0.4:33399 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124606s
	[INFO] 10.244.0.4:37329 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000241612s
	[INFO] 10.244.0.4:57333 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131407s
	[INFO] 10.244.0.4:38806 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060403s
	
	
	==> coredns [979dee88be2b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50127 - 24072 "HINFO IN 7665836187497317301.2285362183679153792. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027543487s
	[INFO] 10.244.0.4:34822 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000224011s
	[INFO] 10.244.0.4:48911 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000349218s
	[INFO] 10.244.0.4:43369 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023699624s
	[INFO] 10.244.0.4:56309 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000258914s
	[INFO] 10.244.0.4:36791 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.003463479s
	[INFO] 10.244.0.4:55996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000301816s
	
	
	==> describe nodes <==
	Name:               ha-022600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-022600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-022600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:56:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-022600
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:12:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:11:27 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:11:27 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:11:27 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:11:27 +0000   Tue, 16 Apr 2024 16:57:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.81.207
	  Hostname:    ha-022600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4674338fa494bbcb2e21e2b4385c5e1
	  System UUID:                201025fc-0c03-cc49-a194-29d6500971a2
	  Boot ID:                    6ae5bedd-6e8e-4f58-b08c-8e9912fd04de
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-rpfpf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-76f75df574-qm89x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-76f75df574-ww2r6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-022600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-mwqvl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-022600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-022600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-2vddt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-022600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-022600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node ha-022600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node ha-022600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node ha-022600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node ha-022600 event: Registered Node ha-022600 in Controller
	  Normal  NodeReady                16m   kubelet          Node ha-022600 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.656516] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr16 16:55] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.165290] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[Apr16 16:56] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.091843] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.493988] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.172637] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.230010] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +2.695048] systemd-fstab-generator[1185]: Ignoring "noauto" option for root device
	[  +0.219400] systemd-fstab-generator[1197]: Ignoring "noauto" option for root device
	[  +0.196554] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.267217] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[ +11.053282] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +0.095458] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.012264] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +6.758798] systemd-fstab-generator[1720]: Ignoring "noauto" option for root device
	[  +0.093227] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.850543] systemd-fstab-generator[2213]: Ignoring "noauto" option for root device
	[  +0.130310] kauditd_printk_skb: 72 callbacks suppressed
	[ +15.381320] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.386371] kauditd_printk_skb: 29 callbacks suppressed
	[Apr16 17:00] hrtimer: interrupt took 5042261 ns
	[  +0.908827] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [c29b0762ff0b] <==
	{"level":"info","ts":"2024-04-16T16:56:33.030826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-16T16:56:33.031001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-16T16:56:33.031092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 received MsgPreVoteResp from 6fac5e7781389861 at term 1"}
	{"level":"info","ts":"2024-04-16T16:56:33.031194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 became candidate at term 2"}
	{"level":"info","ts":"2024-04-16T16:56:33.031296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 received MsgVoteResp from 6fac5e7781389861 at term 2"}
	{"level":"info","ts":"2024-04-16T16:56:33.031401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 became leader at term 2"}
	{"level":"info","ts":"2024-04-16T16:56:33.031482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6fac5e7781389861 elected leader 6fac5e7781389861 at term 2"}
	{"level":"info","ts":"2024-04-16T16:56:33.035895Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:56:33.039506Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"15aeb555476ef740","local-member-id":"6fac5e7781389861","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:56:33.039923Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:56:33.041955Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:56:33.042053Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6fac5e7781389861","local-member-attributes":"{Name:ha-022600 ClientURLs:[https://172.19.81.207:2379]}","request-path":"/0/members/6fac5e7781389861/attributes","cluster-id":"15aeb555476ef740","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T16:56:33.042276Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T16:56:33.044336Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.81.207:2379"}
	{"level":"info","ts":"2024-04-16T16:56:33.052851Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T16:56:33.052893Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T16:56:33.055802Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T16:56:33.063928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T16:57:01.41567Z","caller":"traceutil/trace.go:171","msg":"trace[1184878888] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"279.327005ms","start":"2024-04-16T16:57:01.136324Z","end":"2024-04-16T16:57:01.415651Z","steps":["trace[1184878888] 'process raft request'  (duration: 279.236301ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:06:33.350784Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":964}
	{"level":"info","ts":"2024-04-16T17:06:33.393755Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":964,"took":"42.49244ms","hash":1730924367,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2433024,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-04-16T17:06:33.395361Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1730924367,"revision":964,"compact-revision":-1}
	{"level":"info","ts":"2024-04-16T17:11:33.360995Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1502}
	{"level":"info","ts":"2024-04-16T17:11:33.366072Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1502,"took":"4.116913ms","hash":127222243,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1818624,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-16T17:11:33.366162Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":127222243,"revision":1502,"compact-revision":964}
	
	
	==> kernel <==
	 17:13:05 up 18 min,  0 users,  load average: 0.44, 0.27, 0.19
	Linux ha-022600 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [be245de9ef54] <==
	I0416 17:11:01.144549       1 main.go:227] handling current node
	I0416 17:11:11.151274       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:11:11.151629       1 main.go:227] handling current node
	I0416 17:11:21.165243       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:11:21.165275       1 main.go:227] handling current node
	I0416 17:11:31.170463       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:11:31.170576       1 main.go:227] handling current node
	I0416 17:11:41.183620       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:11:41.183707       1 main.go:227] handling current node
	I0416 17:11:51.197478       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:11:51.197581       1 main.go:227] handling current node
	I0416 17:12:01.211572       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:12:01.212259       1 main.go:227] handling current node
	I0416 17:12:11.222555       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:12:11.222612       1 main.go:227] handling current node
	I0416 17:12:21.232171       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:12:21.232252       1 main.go:227] handling current node
	I0416 17:12:31.241317       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:12:31.241431       1 main.go:227] handling current node
	I0416 17:12:41.254415       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:12:41.254517       1 main.go:227] handling current node
	I0416 17:12:51.270840       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:12:51.270884       1 main.go:227] handling current node
	I0416 17:13:01.279901       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:13:01.279950       1 main.go:227] handling current node
	
	
	==> kube-apiserver [4fd5df8c9fd3] <==
	I0416 16:56:35.504221       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 16:56:35.510308       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 16:56:35.512679       1 aggregator.go:165] initial CRD sync complete...
	I0416 16:56:35.516211       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 16:56:35.516249       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 16:56:35.516256       1 cache.go:39] Caches are synced for autoregister controller
	I0416 16:56:35.517473       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 16:56:35.522352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 16:56:35.529558       1 controller.go:624] quota admission added evaluator for: namespaces
	I0416 16:56:35.542494       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 16:56:36.411016       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0416 16:56:36.418409       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0416 16:56:36.419376       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 16:56:37.172553       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 16:56:37.235069       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 16:56:37.370838       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0416 16:56:37.381797       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.81.207]
	I0416 16:56:37.383264       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 16:56:37.388718       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 16:56:37.435733       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 16:56:38.737496       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 16:56:38.764389       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0416 16:56:38.781093       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 16:56:51.466047       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0416 16:56:51.868826       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a7fb69539df6] <==
	I0416 16:56:52.089701       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-qm89x"
	I0416 16:56:52.106572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="633.961814ms"
	I0416 16:56:52.122316       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="15.686904ms"
	I0416 16:56:52.190000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="67.640369ms"
	I0416 16:56:52.190122       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="63.603µs"
	I0416 16:57:04.964104       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="174.809µs"
	I0416 16:57:04.979092       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="587.33µs"
	I0416 16:57:04.995404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="188.309µs"
	I0416 16:57:05.057328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="96.005µs"
	I0416 16:57:05.964586       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0416 16:57:07.181900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="167.009µs"
	I0416 16:57:07.224163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="15.307781ms"
	I0416 16:57:07.224903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="88.905µs"
	I0416 16:57:07.277301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="19.898845ms"
	I0416 16:57:07.277810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="53.303µs"
	I0416 17:00:45.709324       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 3"
	I0416 17:00:45.728545       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-rpfpf"
	I0416 17:00:45.745464       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-mnl84"
	I0416 17:00:45.756444       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-gph6r"
	I0416 17:00:45.770175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.082711ms"
	I0416 17:00:45.784213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.744211ms"
	I0416 17:00:45.810992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="26.530372ms"
	I0416 17:00:45.811146       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="42.802µs"
	I0416 17:00:48.413892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="8.465463ms"
	I0416 17:00:48.413981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.302µs"
	
	
	==> kube-proxy [05db92f49e0d] <==
	I0416 16:56:54.468581       1 server_others.go:72] "Using iptables proxy"
	I0416 16:56:54.505964       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.81.207"]
	I0416 16:56:54.583838       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:56:54.584172       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:56:54.584273       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:56:54.590060       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:56:54.590806       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:56:54.591014       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:56:54.592331       1 config.go:188] "Starting service config controller"
	I0416 16:56:54.592517       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:56:54.592625       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:56:54.592689       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:56:54.594058       1 config.go:315] "Starting node config controller"
	I0416 16:56:54.594215       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 16:56:54.693900       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 16:56:54.693964       1 shared_informer.go:318] Caches are synced for service config
	I0416 16:56:54.694328       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e042d71e8b0e] <==
	W0416 16:56:36.501819       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 16:56:36.501922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 16:56:36.507709       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 16:56:36.507948       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 16:56:36.573671       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 16:56:36.573877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 16:56:36.602162       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 16:56:36.602205       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 16:56:36.621966       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.622272       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.648392       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 16:56:36.648623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 16:56:36.694872       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 16:56:36.694970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 16:56:36.804118       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.805424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.821863       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.822231       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.866017       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 16:56:36.866298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 16:56:36.904820       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 16:56:36.905097       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 16:56:36.917996       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.918036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0416 16:56:39.298679       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 17:08:38 ha-022600 kubelet[2220]: E0416 17:08:38.992923    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:08:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:08:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:08:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:08:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:09:38 ha-022600 kubelet[2220]: E0416 17:09:38.995960    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:09:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:09:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:09:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:09:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:10:38 ha-022600 kubelet[2220]: E0416 17:10:38.996042    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:10:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:10:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:10:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:10:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:11:39 ha-022600 kubelet[2220]: E0416 17:11:39.004964    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:11:39 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:11:39 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:11:39 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:11:39 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:12:38 ha-022600 kubelet[2220]: E0416 17:12:38.993284    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:12:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:12:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:12:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:12:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [257879ecf06b] <==
	I0416 16:57:06.255455       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 16:57:06.278834       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 16:57:06.280824       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 16:57:06.296912       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 16:57:06.297990       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-022600_72f60c68-7530-4649-9313-75b903c805c4!
	I0416 16:57:06.297754       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e843972e-8d36-423a-bd47-42ea404826e6", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-022600_72f60c68-7530-4649-9313-75b903c805c4 became leader
	I0416 16:57:06.399840       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-022600_72f60c68-7530-4649-9313-75b903c805c4!
	

-- /stdout --
** stderr ** 
	W0416 17:12:58.561317    2868 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600: (11.0288146s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-022600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7fdf7869d9-gph6r busybox-7fdf7869d9-mnl84
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-022600 describe pod busybox-7fdf7869d9-gph6r busybox-7fdf7869d9-mnl84
helpers_test.go:282: (dbg) kubectl --context ha-022600 describe pod busybox-7fdf7869d9-gph6r busybox-7fdf7869d9-mnl84:

-- stdout --
	Name:             busybox-7fdf7869d9-gph6r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h29q5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-h29q5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  2m7s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-7fdf7869d9-mnl84
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xhwqb (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-xhwqb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  2m7s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (752.54s)

TestMultiControlPlane/serial/PingHostFromPods (41.7s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-gph6r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-gph6r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (358.8118ms)

** stderr ** 
	W0416 17:13:18.180055    6924 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-7fdf7869d9-gph6r does not have a host assigned

** /stderr **
ha_test.go:209: Pod busybox-7fdf7869d9-gph6r could not resolve 'host.minikube.internal': exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-mnl84 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-mnl84 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (336.1205ms)

** stderr ** 
	W0416 17:13:18.522730   14136 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-7fdf7869d9-mnl84 does not have a host assigned

** /stderr **
ha_test.go:209: Pod busybox-7fdf7869d9-mnl84 could not resolve 'host.minikube.internal': exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-rpfpf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-rpfpf -- sh -c "ping -c 1 172.19.80.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-022600 -- exec busybox-7fdf7869d9-rpfpf -- sh -c "ping -c 1 172.19.80.1": exit status 1 (10.4068266s)

-- stdout --
	PING 172.19.80.1 (172.19.80.1): 56 data bytes
	
	--- 172.19.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0416 17:13:19.310001    6316 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.19.80.1) from pod (busybox-7fdf7869d9-rpfpf): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600: (10.8446271s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-022600 logs -n 25: (7.344033s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:13 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:13 UTC |
	|         | busybox-7fdf7869d9-rpfpf             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-rpfpf -- sh       |           |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.80.1             |           |                   |                |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:53:50
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:53:50.116950   12816 out.go:291] Setting OutFile to fd 784 ...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.117952   12816 out.go:304] Setting ErrFile to fd 696...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.138920   12816 out.go:298] Setting JSON to false
	I0416 16:53:50.141501   12816 start.go:129] hostinfo: {"hostname":"minikube5","uptime":24059,"bootTime":1713262370,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 16:53:50.141501   12816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 16:53:50.143700   12816 out.go:177] * [ha-022600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 16:53:50.144387   12816 notify.go:220] Checking for updates...
	I0416 16:53:50.144982   12816 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:53:50.145881   12816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:53:50.146373   12816 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 16:53:50.146987   12816 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:53:50.147788   12816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:53:50.149250   12816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:53:54.959514   12816 out.go:177] * Using the hyperv driver based on user configuration
	I0416 16:53:54.959811   12816 start.go:297] selected driver: hyperv
	I0416 16:53:54.959811   12816 start.go:901] validating driver "hyperv" against <nil>
	I0416 16:53:54.959811   12816 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:53:55.002641   12816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:53:55.003374   12816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:53:55.003816   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:53:55.003816   12816 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0416 16:53:55.003816   12816 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0416 16:53:55.003816   12816 start.go:340] cluster config:
	{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:53:55.003816   12816 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:53:55.005700   12816 out.go:177] * Starting "ha-022600" primary control-plane node in "ha-022600" cluster
	I0416 16:53:55.006053   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:53:55.006397   12816 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 16:53:55.006397   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:53:55.006539   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:53:55.006809   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:53:55.007075   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:53:55.007821   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json: {Name:mkc2f9747189bfa0db5ea21e93e1afafc0e89eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:53:55.008149   12816 start.go:360] acquireMachinesLock for ha-022600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:53:55.009151   12816 start.go:364] duration metric: took 1.0024ms to acquireMachinesLock for "ha-022600"
	I0416 16:53:55.009151   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:53:55.009151   12816 start.go:125] createHost starting for "" (driver="hyperv")
	I0416 16:53:55.010175   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:53:55.010397   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:53:55.010397   12816 client.go:168] LocalClient.Create starting
	I0416 16:53:55.010740   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011200   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011541   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:56.853713   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:58.347399   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:59.667644   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:02.791736   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:54:03.131710   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: Creating VM...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:05.824937   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:54:05.825022   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:54:07.398351   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:54:07.398635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:07.398635   12816 main.go:141] libmachine: Creating VHD
	I0416 16:54:07.398733   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:54:10.982944   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E9EB5342-E929-43B6-8B97-D7BDD354CEE1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:54:10.983213   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:54:10.992883   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -SizeBytes 20000MB
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-022600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600 -DynamicMemoryEnabled $false
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:21.397696   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600 -Count 2
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:23.302296   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\boot2docker.iso'
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:25.541060   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd'
	I0416 16:54:27.919093   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: Starting VM...
	I0416 16:54:27.919462   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600
	I0416 16:54:30.480037   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:32.483346   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:34.785082   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:34.785271   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:35.799483   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:37.788898   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:40.058231   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:40.058742   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:41.064074   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:45.301253   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:45.301420   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:46.309647   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:50.614494   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:50.615195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:51.620909   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:53.639317   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:53.640351   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:53.640405   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:55.942630   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:54:55.943393   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:55.943471   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:57.837395   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:54:57.837474   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:59.762683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:59.763360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:59.763440   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:02.010689   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:02.023158   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:02.023158   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:55:02.152140   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:55:02.152244   12816 buildroot.go:166] provisioning hostname "ha-022600"
	I0416 16:55:02.152322   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:03.957618   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:06.309822   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:06.310484   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:06.310484   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600 && echo "ha-022600" | sudo tee /etc/hostname
	I0416 16:55:06.479074   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600
	
	I0416 16:55:06.479182   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:08.433073   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:10.796713   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:10.797321   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:10.797321   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:55:10.944702   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:55:10.944870   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:55:10.944983   12816 buildroot.go:174] setting up certificates
	I0416 16:55:10.944983   12816 provision.go:84] configureAuth start
	I0416 16:55:10.945092   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:12.933614   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:17.088334   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:19.325791   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:19.326294   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:19.326294   12816 provision.go:143] copyHostCerts
	I0416 16:55:19.326294   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:55:19.326294   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:55:19.326294   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:55:19.326900   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:55:19.328097   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:55:19.328097   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:55:19.329417   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:55:19.329417   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:55:19.329417   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:55:19.330063   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:55:19.330726   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600 san=[127.0.0.1 172.19.81.207 ha-022600 localhost minikube]
	I0416 16:55:19.539117   12816 provision.go:177] copyRemoteCerts
	I0416 16:55:19.547114   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:55:19.547114   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:23.727019   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:23.834423   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.287066s)
	I0416 16:55:23.834577   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:55:23.835008   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:55:23.874966   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:55:23.875470   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0416 16:55:23.923921   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:55:23.923921   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:55:23.965042   12816 provision.go:87] duration metric: took 13.0192422s to configureAuth
	I0416 16:55:23.965042   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:55:23.965741   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:55:23.965827   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:25.905339   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:25.905903   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:25.905986   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:28.170079   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:28.170419   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:28.173356   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:28.173937   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:28.173937   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:55:28.301727   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:55:28.301727   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:55:28.302425   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:55:28.302506   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:30.181889   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:32.398667   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:32.399299   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:32.399475   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:55:32.556658   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:55:32.556887   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:34.446928   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:34.446969   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:34.447053   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:36.709442   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:36.710242   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:36.714111   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:36.714437   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:36.714437   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:55:38.655929   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 16:55:38.655929   12816 machine.go:97] duration metric: took 40.8162201s to provisionDockerMachine
	I0416 16:55:38.656036   12816 client.go:171] duration metric: took 1m43.6397622s to LocalClient.Create
	I0416 16:55:38.656036   12816 start.go:167] duration metric: took 1m43.6397622s to libmachine.API.Create "ha-022600"
	I0416 16:55:38.656036   12816 start.go:293] postStartSetup for "ha-022600" (driver="hyperv")
	I0416 16:55:38.656036   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:55:38.665072   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:55:38.665072   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:40.515910   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:42.764754   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:42.765404   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:42.765404   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:42.879399   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2140881s)
	I0416 16:55:42.892410   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:55:42.899117   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:55:42.899117   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:55:42.899734   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:55:42.901086   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:55:42.901154   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:55:42.911237   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:55:42.927664   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:55:42.975440   12816 start.go:296] duration metric: took 4.3191592s for postStartSetup
	I0416 16:55:42.977201   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:44.831562   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:47.134349   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:47.134788   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:47.135000   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:55:47.137270   12816 start.go:128] duration metric: took 1m52.1217609s to createHost
	I0416 16:55:47.137270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:49.024657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:51.238446   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:51.238526   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:51.242455   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:51.243052   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:51.243052   12816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:55:51.369469   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286551.540248133
	
	I0416 16:55:51.369575   12816 fix.go:216] guest clock: 1713286551.540248133
	I0416 16:55:51.369575   12816 fix.go:229] Guest: 2024-04-16 16:55:51.540248133 +0000 UTC Remote: 2024-04-16 16:55:47.1372703 +0000 UTC m=+117.146546101 (delta=4.402977833s)
	I0416 16:55:51.369790   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:53.407581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:55.667543   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:55.667688   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:55.667688   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286551
	I0416 16:55:55.810591   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:55:51 UTC 2024
	
	I0416 16:55:55.810700   12816 fix.go:236] clock set: Tue Apr 16 16:55:51 UTC 2024
	 (err=<nil>)
	I0416 16:55:55.810700   12816 start.go:83] releasing machines lock for "ha-022600", held for 2m0.7946995s
	I0416 16:55:55.810965   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:57.711672   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:59.985139   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:59.985210   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:59.988730   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:55:59.988803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:59.998550   12816 ssh_runner.go:195] Run: cat /version.json
	I0416 16:55:59.998550   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:01.995788   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.995959   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.996084   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:04.379274   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.379356   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.379701   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.391360   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.392161   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.392520   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.469159   12816 ssh_runner.go:235] Completed: cat /version.json: (4.4703555s)
	I0416 16:56:04.479363   12816 ssh_runner.go:195] Run: systemctl --version
	I0416 16:56:04.584079   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5950892s)
	I0416 16:56:04.593130   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:56:04.602217   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:56:04.610705   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:56:04.639084   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:56:04.639119   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:04.639119   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:04.684127   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:56:04.713899   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:56:04.734297   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:56:04.745020   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:56:04.776657   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.806087   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:56:04.854166   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.890388   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:56:04.918140   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:56:04.946595   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:56:04.975408   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:56:05.001633   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:56:05.028505   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:56:05.053299   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:05.230466   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:56:05.260161   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:05.269988   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:56:05.302694   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.335619   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:56:05.368663   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.402792   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.435612   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:56:05.483431   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.505797   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:05.548843   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:56:05.563980   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:56:05.582552   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:56:05.624048   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:56:05.804495   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:56:05.984936   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:56:05.985183   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:56:06.032244   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:06.217075   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:08.662995   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4457805s)
	I0416 16:56:08.670977   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 16:56:08.701542   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:08.730698   12816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 16:56:08.941813   12816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 16:56:09.145939   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.331138   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 16:56:09.370232   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:09.409657   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.615575   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 16:56:09.726879   12816 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 16:56:09.737760   12816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 16:56:09.746450   12816 start.go:562] Will wait 60s for crictl version
	I0416 16:56:09.755840   12816 ssh_runner.go:195] Run: which crictl
	I0416 16:56:09.771470   12816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:56:09.827603   12816 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 16:56:09.836477   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.874967   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.907967   12816 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 16:56:09.908249   12816 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: 172.19.80.1/20
	I0416 16:56:09.924842   12816 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 16:56:09.931830   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:56:09.968931   12816 kubeadm.go:877] updating cluster {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:56:09.968931   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:56:09.975955   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:09.996899   12816 docker.go:685] Got preloaded images: 
	I0416 16:56:09.996899   12816 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0416 16:56:10.008276   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:10.035609   12816 ssh_runner.go:195] Run: which lz4
	I0416 16:56:10.042582   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0416 16:56:10.050849   12816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0416 16:56:10.058074   12816 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 16:56:10.058074   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0416 16:56:11.721910   12816 docker.go:649] duration metric: took 1.6789563s to copy over tarball
	I0416 16:56:11.731181   12816 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 16:56:20.333529   12816 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.60186s)
	I0416 16:56:20.333529   12816 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 16:56:20.400516   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:20.419486   12816 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0416 16:56:20.469018   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:20.655543   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:23.229259   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5734984s)
	I0416 16:56:23.240705   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:23.262332   12816 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0416 16:56:23.262383   12816 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:56:23.262383   12816 kubeadm.go:928] updating node { 172.19.81.207 8443 v1.29.3 docker true true} ...
	I0416 16:56:23.262383   12816 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-022600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.81.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:56:23.270008   12816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 16:56:23.307277   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:23.307277   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:23.307362   12816 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:56:23.307406   12816 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.81.207 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-022600 NodeName:ha-022600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.81.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.81.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:56:23.307691   12816 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.81.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-022600"
	  kubeletExtraArgs:
	    node-ip: 172.19.81.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.81.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 16:56:23.307749   12816 kube-vip.go:111] generating kube-vip config ...
	I0416 16:56:23.318492   12816 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:56:23.343950   12816 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:56:23.344258   12816 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.95.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0416 16:56:23.353585   12816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:56:23.370542   12816 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:56:23.379813   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0416 16:56:23.397865   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0416 16:56:23.432291   12816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:56:23.462868   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0416 16:56:23.492579   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0416 16:56:23.534977   12816 ssh_runner.go:195] Run: grep 172.19.95.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:56:23.542734   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:56:23.575719   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:23.754395   12816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:56:23.781462   12816 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600 for IP: 172.19.81.207
	I0416 16:56:23.781462   12816 certs.go:194] generating shared ca certs ...
	I0416 16:56:23.781462   12816 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 16:56:23.783651   12816 certs.go:256] generating profile certs ...
	I0416 16:56:23.784402   12816 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key
	I0416 16:56:23.784569   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt with IP's: []
	I0416 16:56:23.984047   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt ...
	I0416 16:56:23.984047   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt: {Name:mk3ebdcb7f076a09a399313f7ed3edf14403a6a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.985977   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key ...
	I0416 16:56:23.985977   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key: {Name:mk94343a485b04f4b25a0ccd3245e197e7ecbec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.986215   12816 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648
	I0416 16:56:23.987265   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.81.207 172.19.95.254]
	I0416 16:56:24.317716   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 ...
	I0416 16:56:24.317716   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648: {Name:mk30f7000427979a1bcf8d6fc3995d1f7ccc170c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.319797   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 ...
	I0416 16:56:24.319797   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648: {Name:mk95e9e3e0f96031ef005f6c36470c216303a111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.320163   12816 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt
	I0416 16:56:24.331288   12816 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key
	I0416 16:56:24.332214   12816 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key
	I0416 16:56:24.332214   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt with IP's: []
	I0416 16:56:24.406574   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt ...
	I0416 16:56:24.406574   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt: {Name:mk73158a02cd8861e90a3b76d50704b360c358ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.407013   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key ...
	I0416 16:56:24.407013   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key: {Name:mk6842e2af8fadaf278ec7592edd5bd96f07c8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.408078   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:56:24.408945   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:56:24.409732   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:56:24.417870   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:56:24.418145   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 16:56:24.418533   12816 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 16:56:24.418533   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 16:56:24.418811   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 16:56:24.418990   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 16:56:24.419161   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 16:56:24.419368   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 16:56:24.419647   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 16:56:24.419767   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:24.419867   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 16:56:24.420003   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:56:24.466985   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:56:24.509816   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:56:24.554817   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:56:24.603006   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 16:56:24.646596   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 16:56:24.694120   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:56:24.741669   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 16:56:24.785888   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 16:56:24.829403   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:56:24.891821   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 16:56:24.933883   12816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 16:56:24.975091   12816 ssh_runner.go:195] Run: openssl version
	I0416 16:56:24.994129   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 16:56:25.021821   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.028512   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.037989   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.054924   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:56:25.080011   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:56:25.106815   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.113980   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.126339   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.144599   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:56:25.170309   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 16:56:25.199080   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.206080   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.214031   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.237026   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
	I0416 16:56:25.266837   12816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:56:25.273408   12816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:56:25.273858   12816 kubeadm.go:391] StartCluster: {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65422424e940246c9ed2 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:56:25.281991   12816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 16:56:25.314891   12816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 16:56:25.342248   12816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 16:56:25.368032   12816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 16:56:25.385737   12816 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 16:56:25.385737   12816 kubeadm.go:156] found existing configuration files:
	
	I0416 16:56:25.393851   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 16:56:25.410393   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 16:56:25.421874   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 16:56:25.453762   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 16:56:25.468769   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 16:56:25.477353   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 16:56:25.501898   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.515888   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 16:56:25.524885   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.548518   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 16:56:25.563660   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 16:56:25.572269   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 16:56:25.587981   12816 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 16:56:25.791977   12816 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 16:56:25.791977   12816 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 16:56:25.958638   12816 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 16:56:25.959035   12816 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 16:56:25.959403   12816 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 16:56:26.228464   12816 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 16:56:26.229544   12816 out.go:204]   - Generating certificates and keys ...
	I0416 16:56:26.229862   12816 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 16:56:26.230882   12816 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 16:56:26.359024   12816 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 16:56:26.583044   12816 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 16:56:26.715543   12816 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 16:56:27.014892   12816 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 16:56:27.414264   12816 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 16:56:27.414467   12816 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.642396   12816 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 16:56:27.642770   12816 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.844566   12816 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 16:56:28.089475   12816 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 16:56:28.543900   12816 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 16:56:28.548586   12816 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 16:56:29.051829   12816 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 16:56:29.485679   12816 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 16:56:29.830737   12816 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 16:56:30.055972   12816 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 16:56:30.310446   12816 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 16:56:30.311113   12816 out.go:204]   - Booting up control plane ...
	I0416 16:56:30.311289   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 16:56:30.311970   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 16:56:30.317049   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 16:56:30.342443   12816 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 16:56:30.526725   12816 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 16:56:37.142045   12816 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.615653 seconds
	I0416 16:56:37.159025   12816 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 16:56:37.175108   12816 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 16:56:37.707867   12816 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 16:56:37.708715   12816 kubeadm.go:309] [mark-control-plane] Marking the node ha-022600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 16:56:38.222729   12816 kubeadm.go:309] [bootstrap-token] Using token: a3r5qn.ikva200bfcppykg5
	I0416 16:56:38.223819   12816 out.go:204]   - Configuring RBAC rules ...
	I0416 16:56:38.224231   12816 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 16:56:38.232416   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 16:56:38.244982   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 16:56:38.249926   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 16:56:38.257723   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 16:56:38.262029   12816 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 16:56:38.279883   12816 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 16:56:38.592701   12816 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 16:56:38.638273   12816 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 16:56:38.639572   12816 kubeadm.go:309] 
	I0416 16:56:38.640154   12816 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 16:56:38.640230   12816 kubeadm.go:309] 
	I0416 16:56:38.640982   12816 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 16:56:38.641038   12816 kubeadm.go:309] 
	I0416 16:56:38.641299   12816 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 16:56:38.641581   12816 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309] 
	I0416 16:56:38.641989   12816 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 16:56:38.642031   12816 kubeadm.go:309] 
	I0416 16:56:38.642184   12816 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 16:56:38.642228   12816 kubeadm.go:309] 
	I0416 16:56:38.642350   12816 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 16:56:38.642660   12816 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 16:56:38.642862   12816 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 16:56:38.642900   12816 kubeadm.go:309] 
	I0416 16:56:38.643166   12816 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 16:56:38.643426   12816 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 16:56:38.643426   12816 kubeadm.go:309] 
	I0416 16:56:38.643613   12816 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.643867   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 16:56:38.643909   12816 kubeadm.go:309] 	--control-plane 
	I0416 16:56:38.643961   12816 kubeadm.go:309] 
	I0416 16:56:38.644233   12816 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 16:56:38.644272   12816 kubeadm.go:309] 
	I0416 16:56:38.644444   12816 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.644734   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 16:56:38.647455   12816 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 16:56:38.647488   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:38.647539   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:38.648246   12816 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 16:56:38.657141   12816 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 16:56:38.671263   12816 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 16:56:38.671263   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 16:56:38.722410   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 16:56:39.265655   12816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-022600 minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=ha-022600 minikube.k8s.io/primary=true
	I0416 16:56:39.290244   12816 ops.go:34] apiserver oom_adj: -16
	I0416 16:56:39.441163   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.950155   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.453751   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.955147   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.455931   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.953044   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.454696   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.949299   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.454962   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.953447   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.456402   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.956686   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.449476   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.951602   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.451988   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.949212   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.449356   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.950703   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.458777   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.956811   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.456669   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.943595   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.443906   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.950503   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.454863   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.944285   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:52.083562   12816 kubeadm.go:1107] duration metric: took 12.8170858s to wait for elevateKubeSystemPrivileges
	W0416 16:56:52.083816   12816 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 16:56:52.083816   12816 kubeadm.go:393] duration metric: took 26.808438s to StartCluster
	I0416 16:56:52.083816   12816 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.084214   12816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:52.086643   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.088384   12816 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:56:52.088384   12816 start.go:240] waiting for startup goroutines ...
	I0416 16:56:52.088384   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 16:56:52.088384   12816 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 16:56:52.088630   12816 addons.go:69] Setting storage-provisioner=true in profile "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:234] Setting addon storage-provisioner=true in "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:69] Setting default-storageclass=true in profile "ha-022600"
	I0416 16:56:52.088850   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:52.088964   12816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-022600"
	I0416 16:56:52.088964   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:56:52.090289   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.090671   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.207597   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 16:56:52.469504   12816 start.go:946] {"host.minikube.internal": 172.19.80.1} host record injected into CoreDNS's ConfigMap
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.165583   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.165635   12816 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:54.165635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.166734   12816 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:56:54.166340   12816 kapi.go:59] client config for ha-022600: &rest.Config{Host:"https://172.19.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 16:56:54.167133   12816 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:56:54.167133   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 16:56:54.167133   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:54.167791   12816 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 16:56:54.168180   12816 addons.go:234] Setting addon default-storageclass=true in "ha-022600"
	I0416 16:56:54.168347   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:54.169251   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:56.312581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.312988   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313046   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313270   12816 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 16:56:56.313270   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.330966   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:58.735727   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:58.735876   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.736103   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:58.898469   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:00.676245   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:57:00.828151   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 16:57:01.248041   12816 round_trippers.go:463] GET https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0416 16:57:01.248041   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.248041   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.248041   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.261890   12816 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0416 16:57:01.262478   12816 round_trippers.go:463] PUT https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0416 16:57:01.262478   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Content-Type: application/json
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.262478   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.268964   12816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:57:01.269995   12816 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 16:57:01.270495   12816 addons.go:505] duration metric: took 9.181591s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 16:57:01.270576   12816 start.go:245] waiting for cluster config update ...
	I0416 16:57:01.270618   12816 start.go:254] writing updated cluster config ...
	I0416 16:57:01.271859   12816 out.go:177] 
	I0416 16:57:01.284169   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:57:01.284169   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.285951   12816 out.go:177] * Starting "ha-022600-m02" control-plane node in "ha-022600" cluster
	I0416 16:57:01.286952   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:57:01.286952   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:57:01.286952   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:57:01.286952   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:57:01.286952   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.296247   12816 start.go:360] acquireMachinesLock for ha-022600-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:57:01.297324   12816 start.go:364] duration metric: took 1.0773ms to acquireMachinesLock for "ha-022600-m02"
	I0416 16:57:01.297559   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:57:01.297559   12816 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0416 16:57:01.297559   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:57:01.297559   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:57:01.297559   12816 client.go:168] LocalClient.Create starting
	I0416 16:57:01.298838   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299293   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:57:03.017072   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:57:03.017279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:03.017366   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stderr =====>] : 
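	[editor's note] The `Get-VMSwitch` query above keeps only External switches or the well-known "Default Switch" GUID and sorts by `SwitchType`; the driver then picks a switch from that list. A minimal Python sketch of the same selection logic, assuming the JSON shape shown in the log (the helper name `pick_switch` is illustrative, not minikube's code):

```python
import json

# JSON shape as emitted by the ConvertTo-Json call in the log above.
raw = '''[
    {
        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
        "Name":  "Default Switch",
        "SwitchType":  1
    }
]'''

# GUID of the built-in Hyper-V "Default Switch", matched by the query above.
DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

def pick_switch(switches):
    # Keep External switches (SwitchType 2) or the Default Switch by GUID,
    # then sort ascending by SwitchType, mirroring the PowerShell filter.
    usable = [s for s in switches
              if s["SwitchType"] == 2 or s["Id"] == DEFAULT_SWITCH_ID]
    usable.sort(key=lambda s: s["SwitchType"])
    return usable[0]["Name"] if usable else None

print(pick_switch(json.loads(raw)))  # Default Switch
```

With only the Default Switch present, as on this test host, the filter returns it and the log later reports `Using switch "Default Switch"`.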
	I0416 16:57:09.316740   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:57:09.669552   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: Creating VM...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:12.690107   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:57:12.690185   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:14.267157   12816 main.go:141] libmachine: Creating VHD
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : FE960248-03C1-43D6-B7AE-A60D4C86308B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:57:17.758158   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:57:20.709379   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:20.709950   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:20.710019   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -SizeBytes 20000MB
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stderr =====>] : 
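	[editor's note] The VHD sequence above is a known docker-machine trick: create a tiny fixed VHD, write a "magic" tar archive containing the SSH key at its start so the boot2docker init scripts can extract it on first boot, then convert to a dynamic VHD and resize it to the requested 20000MB. A hedged Python sketch of the tar-writing step (the in-archive path `.ssh/authorized_keys` is illustrative; the real layout is docker-machine's):

```python
import io
import tarfile

def write_magic_tar(disk_path, key_bytes):
    # Build a tar archive in memory containing the SSH key material.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=".ssh/authorized_keys")  # illustrative name
        info.size = len(key_bytes)
        tar.addfile(info, io.BytesIO(key_bytes))
    # Write the tar data at offset 0 of the raw disk image; a fixed VHD's
    # data region starts at the front, so the guest sees a valid tar header.
    with open(disk_path, "r+b") as disk:
        disk.seek(0)
        disk.write(buf.getvalue())

# usage sketch: a zero-filled stand-in for the 10MB fixed VHD
with open("disk.img", "wb") as f:
    f.truncate(10 * 1024 * 1024)
write_magic_tar("disk.img", b"ssh-rsa AAAA... example")
with tarfile.open("disk.img") as tar:
    print(tar.getmembers()[0].name)  # .ssh/authorized_keys
```

The fixed-then-convert dance matters because `Convert-VHD ... -VHDType Dynamic -DeleteSource` preserves the data written at the front while keeping the on-disk file small until the guest actually uses the space.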
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-022600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600-m02 -DynamicMemoryEnabled $false
	I0416 16:57:28.159153   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:28.159229   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:28.159409   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600-m02 -Count 2
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\boot2docker.iso'
	I0416 16:57:32.420739   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:32.421735   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:32.421878   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd'
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: Starting VM...
	I0416 16:57:34.780971   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600-m02
	I0416 16:57:37.369505   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:57:37.369767   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:39.415286   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:42.700464   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:48.000886   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:49.992438   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:49.992894   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:49.992930   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:53.290891   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:55.287716   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:55.287962   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:55.288037   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:58.572803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:02.905391   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stderr =====>] : 
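	[editor's note] The "Waiting for host to start..." section above polls `(Get-VM ...).state` and `((Get-VM ...).networkadapters[0]).ipaddresses[0]` roughly once a second until the adapter reports an address (172.19.80.125 on the fifth attempt here). A minimal Python sketch of that retry loop; `get_ip` stands in for the PowerShell query and is an assumption of this sketch:

```python
import time

def wait_for_ip(get_ip, timeout=120.0, interval=1.0,
                clock=time.monotonic, sleep=time.sleep):
    # Poll until get_ip() returns a non-empty address or the deadline passes,
    # sleeping between attempts as the log's ~1s cadence suggests.
    deadline = clock() + timeout
    while clock() < deadline:
        ip = get_ip()
        if ip:
            return ip
        sleep(interval)
    raise TimeoutError("VM did not report an IP address in time")

# usage sketch: the address appears on the fifth poll, as in the log above
answers = iter(["", "", "", "", "172.19.80.125"])
print(wait_for_ip(lambda: next(answers), sleep=lambda _: None))  # 172.19.80.125
```

Injecting `clock` and `sleep` keeps the loop testable without real delays, which is also why the log shows evenly spaced state/IP query pairs.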
	I0416 16:58:04.899479   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:58:04.899479   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:06.914869   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:09.273511   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:09.273546   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:09.277783   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:09.278406   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:09.278406   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:58:09.413281   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:58:09.413281   12816 buildroot.go:166] provisioning hostname "ha-022600-m02"
	I0416 16:58:09.413281   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:11.439079   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:13.805295   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:13.805684   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:13.805684   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600-m02 && echo "ha-022600-m02" | sudo tee /etc/hostname
	I0416 16:58:13.957933   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600-m02
	
	I0416 16:58:13.958021   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:18.176996   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:18.178002   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:18.182057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:18.182681   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:18.182681   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
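	[editor's note] The SSH command above ensures /etc/hosts resolves the new hostname: if no line already ends in `ha-022600-m02`, it rewrites an existing `127.0.1.1` entry via `sed`, otherwise appends one via `tee -a`. A pure-function Python rendering of that logic (the real command edits the guest's /etc/hosts with sudo; this sketch only transforms text):

```python
import re

def ensure_hostname(hosts_text, hostname):
    # Mirrors the shell: grep -xq matches a whole line, so use fullmatch.
    lines = hosts_text.splitlines()
    if any(re.fullmatch(r".*\s" + re.escape(hostname), ln) for ln in lines):
        return hosts_text  # hostname already present; nothing to do
    if any(re.fullmatch(r"127\.0\.1\.1\s.*", ln) for ln in lines):
        # Equivalent of: sed -i 's/^127.0.1.1\s.*/127.0.1.1 <hostname>/g'
        lines = [re.sub(r"^127\.0\.1\.1\s.*", "127.0.1.1 " + hostname, ln)
                 for ln in lines]
    else:
        # Equivalent of: echo '127.0.1.1 <hostname>' | tee -a /etc/hosts
        lines.append("127.0.1.1 " + hostname)
    return "\n".join(lines)

print(ensure_hostname("127.0.0.1 localhost\n127.0.1.1 minikube",
                      "ha-022600-m02"))
```

The empty SSH output logged just after this command is the expected success case: the `127.0.1.1` line existed, so only the silent `sed` branch ran.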
	I0416 16:58:18.315751   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:58:18.315853   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:58:18.315853   12816 buildroot.go:174] setting up certificates
	I0416 16:58:18.315853   12816 provision.go:84] configureAuth start
	I0416 16:58:18.315853   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:20.243862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:22.525833   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:22.525945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:22.526057   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:24.418894   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:26.735560   12816 provision.go:143] copyHostCerts
	I0416 16:58:26.736546   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:58:26.736627   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:58:26.737290   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:58:26.737900   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:58:26.737900   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:58:26.738191   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:58:26.738908   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:58:26.738977   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:58:26.739652   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600-m02 san=[127.0.0.1 172.19.80.125 ha-022600-m02 localhost minikube]
	I0416 16:58:26.917277   12816 provision.go:177] copyRemoteCerts
	I0416 16:58:26.926308   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:58:26.926600   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:28.830343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:31.113681   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:31.229222   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3026703s)
	I0416 16:58:31.229222   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:58:31.229700   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:58:31.279666   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:58:31.280307   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:58:31.328101   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:58:31.328245   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:58:31.382563   12816 provision.go:87] duration metric: took 13.065969s to configureAuth
	I0416 16:58:31.382563   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:58:31.383343   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:58:31.383343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:33.331275   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:35.653673   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:35.653721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:35.656855   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:35.657430   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:35.657430   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:58:35.781565   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:58:35.781565   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:58:35.781565   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:58:35.782090   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:37.696344   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:39.961057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:39.961515   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:39.961564   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.81.207"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:58:40.123664   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.81.207
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
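The comments inside the unit file above describe the standard systemd technique for overriding `ExecStart=` from a drop-in: an empty `ExecStart=` line first clears the command inherited from the base unit, and only then is the replacement declared. A minimal standalone sketch of that pattern (the `/tmp` path and the dockerd flags here are illustrative, not the minikube provisioner's actual output):

```shell
# Sketch of the ExecStart-clearing drop-in pattern described in the unit above.
# The empty ExecStart= clears any inherited command; without it, systemd refuses
# to start a non-oneshot service that ends up with two ExecStart= settings.
mkdir -p /tmp/execstart-demo/docker.service.d
cat > /tmp/execstart-demo/docker.service.d/override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
# Two ExecStart= lines in the drop-in: one to clear, one to replace.
grep -c '^ExecStart=' /tmp/execstart-demo/docker.service.d/override.conf
```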
	
	I0416 16:58:40.123818   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:42.064878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:42.064974   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:42.065152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:44.330103   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:44.330731   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:44.330731   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:58:46.283136   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
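The `diff ... || { mv ...; restart; }` command above is a common install-if-changed idiom: `diff -u old new` exits non-zero both when the files differ and when `old` is missing, so the replacement branch runs in either case (the log hits the missing-file case, hence the `can't stat` message followed by a successful install). An illustration with hypothetical scratch files:

```shell
# Illustrative install-if-changed idiom, with hypothetical /tmp paths.
# `diff -u old new` exits non-zero when the files differ OR when `old` does not
# exist, so the replacement branch covers both cases.
mkdir -p /tmp/idiom-demo
old=/tmp/idiom-demo/current.conf
new=/tmp/idiom-demo/candidate.conf
printf 'setting=b\n' > "$new"
diff -u "$old" "$new" >/dev/null 2>&1 || mv "$new" "$old"
cat "$old"   # prints: setting=b
```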
	
	I0416 16:58:46.283253   12816 machine.go:97] duration metric: took 41.3814279s to provisionDockerMachine
	I0416 16:58:46.283253   12816 client.go:171] duration metric: took 1m44.9797412s to LocalClient.Create
	I0416 16:58:46.283253   12816 start.go:167] duration metric: took 1m44.9797412s to libmachine.API.Create "ha-022600"
	I0416 16:58:46.283253   12816 start.go:293] postStartSetup for "ha-022600-m02" (driver="hyperv")
	I0416 16:58:46.283345   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:58:46.292724   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:58:46.292724   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:50.480821   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:50.575284   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2823171s)
	I0416 16:58:50.584260   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:58:50.591292   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:58:50.591900   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:58:50.591900   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:58:50.601073   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:58:50.618807   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:58:50.671301   12816 start.go:296] duration metric: took 4.3877068s for postStartSetup
	I0416 16:58:50.673161   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:52.621684   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:54.923763   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:58:54.926483   12816 start.go:128] duration metric: took 1m53.622481s to createHost
	I0416 16:58:54.926657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:56.793184   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:59.024255   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:59.025184   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:59.029108   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:59.029633   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:59.029730   12816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:58:59.149333   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286739.315259098
	
	I0416 16:58:59.149333   12816 fix.go:216] guest clock: 1713286739.315259098
	I0416 16:58:59.149333   12816 fix.go:229] Guest: 2024-04-16 16:58:59.315259098 +0000 UTC Remote: 2024-04-16 16:58:54.9265716 +0000 UTC m=+304.925199701 (delta=4.388687498s)
	I0416 16:58:59.149333   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:01.054656   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:03.307071   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:59:03.307459   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:59:03.307531   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286739
	I0416 16:59:03.449024   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:58:59 UTC 2024
	
	I0416 16:59:03.449024   12816 fix.go:236] clock set: Tue Apr 16 16:58:59 UTC 2024
	 (err=<nil>)
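The clock fix above follows a simple pattern: read the guest clock with `date +%s.%N`, compute the delta against the host's recorded time, and converge with `sudo date -s @<epoch>`. A sketch of the delta arithmetic, using hypothetical epoch values rather than the ones in this log:

```shell
# Illustrative guest/host clock delta, with made-up epoch seconds.
# The provisioner reads the guest clock (`date +%s.%N`), compares it with the
# host clock, and if they drift runs `sudo date -s @<host_epoch>` on the guest.
host_epoch=1713286734    # hypothetical host time
guest_epoch=1713286739   # hypothetical guest time, as read via `date +%s`
delta=$((guest_epoch - host_epoch))
echo "delta=${delta}s"
```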
	I0416 16:59:03.449024   12816 start.go:83] releasing machines lock for "ha-022600-m02", held for 2m2.1447745s
	I0416 16:59:03.450039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:07.739042   12816 out.go:177] * Found network options:
	I0416 16:59:07.739784   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.740404   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.741027   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.741505   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 16:59:07.742708   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.744988   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:59:07.745153   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:07.752817   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 16:59:07.752817   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:09.758953   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:12.157582   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.158536   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.159044   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.185179   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.257436   12816 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5043642s)
	W0416 16:59:12.257436   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:59:12.266545   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:59:12.367206   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:59:12.367296   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:12.367330   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6219642s)
	I0416 16:59:12.367330   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:12.423201   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:59:12.453988   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:59:12.472992   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:59:12.482991   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:59:12.510864   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.538866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:59:12.565866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.597751   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:59:12.622761   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:59:12.648905   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:59:12.674904   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
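The sequence of `sed` commands above rewrites settings in containerd's `config.toml` in place while preserving indentation, by capturing the leading whitespace and re-emitting it. A standalone example of the `SystemdCgroup` rewrite against a scratch file (hypothetical path; same GNU sed pattern as the log):

```shell
# Illustrative: the SystemdCgroup rewrite from the log, run against a scratch
# fragment of a containerd config.toml. The capture group \1 preserves the
# original indentation of the rewritten line.
mkdir -p /tmp/containerd-demo
cfg=/tmp/containerd-demo/config.toml
printf '          SystemdCgroup = true\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```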
	I0416 16:59:12.713452   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:59:12.741495   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:59:12.768497   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:12.975524   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:59:13.011635   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:13.023647   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:59:13.058146   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.091991   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:59:13.139058   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.173081   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.208242   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:59:13.259511   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.282094   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:13.329081   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:59:13.344832   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:59:13.362131   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:59:13.403377   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:59:13.597444   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:59:13.768147   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:59:13.768278   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:59:13.808294   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:13.987216   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:00:15.104612   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1138396s)
	I0416 17:00:15.115049   12816 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 17:00:15.145752   12816 out.go:177] 
	W0416 17:00:15.146473   12816 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 16:58:45 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.076842920Z" level=info msg="Starting up"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.077687177Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.078706068Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.109665355Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138411128Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138448735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138508447Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138523049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138600164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138632670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138848110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138955930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139030244Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139045347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139142365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139433520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142495192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142588309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142778845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142795748Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143044695Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143174419Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143191422Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.152862930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153144583Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153313214Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153337519Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153354522Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153467543Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153957434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154159572Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154195179Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154212082Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154230586Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154258491Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154272393Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154287696Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154303599Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154317302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154330504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154344107Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154373612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154392516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154406618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154421121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154434024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154447526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154460128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154474031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154498536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154514539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154525841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154555046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154568249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154583952Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154604755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154629960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154642062Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154700973Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154916114Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155014532Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155030135Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155203567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155302486Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155325090Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155706861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155796078Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155907599Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155947306Z" level=info msg="containerd successfully booted in 0.047582s"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.119001526Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.129323458Z" level=info msg="Loading containers: start."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.358382320Z" level=info msg="Loading containers: done."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377033580Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377149301Z" level=info msg="Daemon has completed initialization"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.447556885Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 16:58:46 ha-022600-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.449134569Z" level=info msg="API listen on [::]:2376"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.178053148Z" level=info msg="Processing signal 'terminated'"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.179830517Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 16:59:14 ha-022600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.180814055Z" level=info msg="Daemon shutdown complete"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181020363Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181054564Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:59:15 ha-022600-m02 dockerd[1019]: time="2024-04-16T16:59:15.248212596Z" level=info msg="Starting up"
	Apr 16 17:00:15 ha-022600-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0416 17:00:15.146611   12816 out.go:239] * 
	W0416 17:00:15.147806   12816 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 17:00:15.148383   12816 out.go:177] 
	
	
	==> Docker <==
	Apr 16 17:00:33 ha-022600 dockerd[1325]: 2024/04/16 17:00:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:00:33 ha-022600 dockerd[1325]: 2024/04/16 17:00:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:00:33 ha-022600 dockerd[1325]: 2024/04/16 17:00:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:00:33 ha-022600 dockerd[1325]: 2024/04/16 17:00:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:00:33 ha-022600 dockerd[1325]: 2024/04/16 17:00:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:00:33 ha-022600 dockerd[1325]: 2024/04/16 17:00:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:00:46 ha-022600 dockerd[1331]: time="2024-04-16T17:00:46.155449869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 17:00:46 ha-022600 dockerd[1331]: time="2024-04-16T17:00:46.155537174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 17:00:46 ha-022600 dockerd[1331]: time="2024-04-16T17:00:46.155576176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:00:46 ha-022600 dockerd[1331]: time="2024-04-16T17:00:46.156267612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:00:46 ha-022600 cri-dockerd[1232]: time="2024-04-16T17:00:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8a4de3aa24af1283627968e3b5972a40e7430994e81c6c1dc2f08b918b9b3ce1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 16 17:00:47 ha-022600 cri-dockerd[1232]: time="2024-04-16T17:00:47Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 16 17:00:47 ha-022600 dockerd[1331]: time="2024-04-16T17:00:47.477180079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 17:00:47 ha-022600 dockerd[1331]: time="2024-04-16T17:00:47.477294385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 17:00:47 ha-022600 dockerd[1331]: time="2024-04-16T17:00:47.477311186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:00:47 ha-022600 dockerd[1331]: time="2024-04-16T17:00:47.478281439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d38b1a5f4caa8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   13 minutes ago      Running             busybox                   0                   8a4de3aa24af1       busybox-7fdf7869d9-rpfpf
	3fe545bfad4e6       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   093278b3840ef       coredns-76f75df574-qm89x
	979dee88be2b4       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   4ad38b0d59335       coredns-76f75df574-ww2r6
	257879ecf06b2       6e38f40d628db                                                                                         16 minutes ago      Running             storage-provisioner       0                   bf991c3e34e2d       storage-provisioner
	be245de9ef545       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              16 minutes ago      Running             kindnet-cni               0                   92c35b3fd0967       kindnet-mwqvl
	05db92f49e0df       a1d263b5dc5b0                                                                                         16 minutes ago      Running             kube-proxy                0                   12380f49c1509       kube-proxy-2vddt
	d1ba82cd26254       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     17 minutes ago      Running             kube-vip                  0                   fa2c75c4c8d59       kube-vip-ha-022600
	a7fb69539df62       6052a25da3f97                                                                                         17 minutes ago      Running             kube-controller-manager   0                   b536621e20d4b       kube-controller-manager-ha-022600
	4fd5df8c9fd37       39f995c9f1996                                                                                         17 minutes ago      Running             kube-apiserver            0                   5a7a1e80caeb4       kube-apiserver-ha-022600
	e042d71e8b0e8       8c390d98f50c0                                                                                         17 minutes ago      Running             kube-scheduler            0                   5a2551c91a1b6       kube-scheduler-ha-022600
	c29b0762ff0bf       3861cfcd7c04c                                                                                         17 minutes ago      Running             etcd                      0                   c8a9aa3126cf5       etcd-ha-022600
	
	
	==> coredns [3fe545bfad4e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55981 - 4765 "HINFO IN 3735046377920793891.8143170502200932773. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058936595s
	[INFO] 10.244.0.4:43350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000388921s
	[INFO] 10.244.0.4:35317 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.052221997s
	[INFO] 10.244.0.4:52074 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.040069369s
	[INFO] 10.244.0.4:49068 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.053312593s
	[INFO] 10.244.0.4:54711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123507s
	[INFO] 10.244.0.4:44694 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037006811s
	[INFO] 10.244.0.4:33399 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124606s
	[INFO] 10.244.0.4:37329 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000241612s
	[INFO] 10.244.0.4:57333 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131407s
	[INFO] 10.244.0.4:38806 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060403s
	[INFO] 10.244.0.4:48948 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000263914s
	[INFO] 10.244.0.4:51825 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177309s
	[INFO] 10.244.0.4:53272 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00018301s
	
	
	==> coredns [979dee88be2b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50127 - 24072 "HINFO IN 7665836187497317301.2285362183679153792. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027543487s
	[INFO] 10.244.0.4:34822 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000224011s
	[INFO] 10.244.0.4:48911 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000349218s
	[INFO] 10.244.0.4:43369 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023699624s
	[INFO] 10.244.0.4:56309 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000258914s
	[INFO] 10.244.0.4:36791 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.003463479s
	[INFO] 10.244.0.4:55996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000301816s
	[INFO] 10.244.0.4:35967 - 5 "PTR IN 1.80.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000116506s
	
	
	==> describe nodes <==
	Name:               ha-022600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-022600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-022600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:56:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-022600
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:13:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:11:27 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:11:27 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:11:27 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:11:27 +0000   Tue, 16 Apr 2024 16:57:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.81.207
	  Hostname:    ha-022600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4674338fa494bbcb2e21e2b4385c5e1
	  System UUID:                201025fc-0c03-cc49-a194-29d6500971a2
	  Boot ID:                    6ae5bedd-6e8e-4f58-b08c-8e9912fd04de
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-rpfpf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-76f75df574-qm89x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-76f75df574-ww2r6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-022600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-mwqvl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-022600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-022600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-2vddt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-022600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-022600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node ha-022600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node ha-022600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node ha-022600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node ha-022600 event: Registered Node ha-022600 in Controller
	  Normal  NodeReady                16m   kubelet          Node ha-022600 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.656516] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr16 16:55] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.165290] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[Apr16 16:56] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.091843] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.493988] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.172637] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.230010] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +2.695048] systemd-fstab-generator[1185]: Ignoring "noauto" option for root device
	[  +0.219400] systemd-fstab-generator[1197]: Ignoring "noauto" option for root device
	[  +0.196554] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.267217] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[ +11.053282] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +0.095458] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.012264] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +6.758798] systemd-fstab-generator[1720]: Ignoring "noauto" option for root device
	[  +0.093227] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.850543] systemd-fstab-generator[2213]: Ignoring "noauto" option for root device
	[  +0.130310] kauditd_printk_skb: 72 callbacks suppressed
	[ +15.381320] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.386371] kauditd_printk_skb: 29 callbacks suppressed
	[Apr16 17:00] hrtimer: interrupt took 5042261 ns
	[  +0.908827] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [c29b0762ff0b] <==
	{"level":"info","ts":"2024-04-16T16:56:33.030826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-16T16:56:33.031001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-16T16:56:33.031092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 received MsgPreVoteResp from 6fac5e7781389861 at term 1"}
	{"level":"info","ts":"2024-04-16T16:56:33.031194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 became candidate at term 2"}
	{"level":"info","ts":"2024-04-16T16:56:33.031296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 received MsgVoteResp from 6fac5e7781389861 at term 2"}
	{"level":"info","ts":"2024-04-16T16:56:33.031401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fac5e7781389861 became leader at term 2"}
	{"level":"info","ts":"2024-04-16T16:56:33.031482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6fac5e7781389861 elected leader 6fac5e7781389861 at term 2"}
	{"level":"info","ts":"2024-04-16T16:56:33.035895Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:56:33.039506Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"15aeb555476ef740","local-member-id":"6fac5e7781389861","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:56:33.039923Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:56:33.041955Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T16:56:33.042053Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6fac5e7781389861","local-member-attributes":"{Name:ha-022600 ClientURLs:[https://172.19.81.207:2379]}","request-path":"/0/members/6fac5e7781389861/attributes","cluster-id":"15aeb555476ef740","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T16:56:33.042276Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T16:56:33.044336Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.81.207:2379"}
	{"level":"info","ts":"2024-04-16T16:56:33.052851Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T16:56:33.052893Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T16:56:33.055802Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T16:56:33.063928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T16:57:01.41567Z","caller":"traceutil/trace.go:171","msg":"trace[1184878888] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"279.327005ms","start":"2024-04-16T16:57:01.136324Z","end":"2024-04-16T16:57:01.415651Z","steps":["trace[1184878888] 'process raft request'  (duration: 279.236301ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:06:33.350784Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":964}
	{"level":"info","ts":"2024-04-16T17:06:33.393755Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":964,"took":"42.49244ms","hash":1730924367,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2433024,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-04-16T17:06:33.395361Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1730924367,"revision":964,"compact-revision":-1}
	{"level":"info","ts":"2024-04-16T17:11:33.360995Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1502}
	{"level":"info","ts":"2024-04-16T17:11:33.366072Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1502,"took":"4.116913ms","hash":127222243,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1818624,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-16T17:11:33.366162Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":127222243,"revision":1502,"compact-revision":964}
	
	
	==> kernel <==
	 17:13:47 up 19 min,  0 users,  load average: 0.47, 0.30, 0.20
	Linux ha-022600 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [be245de9ef54] <==
	I0416 17:11:41.183707       1 main.go:227] handling current node
	I0416 17:11:51.197478       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:11:51.197581       1 main.go:227] handling current node
	I0416 17:12:01.211572       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:12:01.212259       1 main.go:227] handling current node
	I0416 17:12:11.222555       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:12:11.222612       1 main.go:227] handling current node
	I0416 17:12:21.232171       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:12:21.232252       1 main.go:227] handling current node
	I0416 17:12:31.241317       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:12:31.241431       1 main.go:227] handling current node
	I0416 17:12:41.254415       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:12:41.254517       1 main.go:227] handling current node
	I0416 17:12:51.270840       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:12:51.270884       1 main.go:227] handling current node
	I0416 17:13:01.279901       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:13:01.279950       1 main.go:227] handling current node
	I0416 17:13:11.289371       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:13:11.289481       1 main.go:227] handling current node
	I0416 17:13:21.293849       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:13:21.293950       1 main.go:227] handling current node
	I0416 17:13:31.300301       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:13:31.300352       1 main.go:227] handling current node
	I0416 17:13:41.310131       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:13:41.310168       1 main.go:227] handling current node
	
	
	==> kube-apiserver [4fd5df8c9fd3] <==
	I0416 16:56:35.504221       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 16:56:35.510308       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 16:56:35.512679       1 aggregator.go:165] initial CRD sync complete...
	I0416 16:56:35.516211       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 16:56:35.516249       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 16:56:35.516256       1 cache.go:39] Caches are synced for autoregister controller
	I0416 16:56:35.517473       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 16:56:35.522352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 16:56:35.529558       1 controller.go:624] quota admission added evaluator for: namespaces
	I0416 16:56:35.542494       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 16:56:36.411016       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0416 16:56:36.418409       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0416 16:56:36.419376       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 16:56:37.172553       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 16:56:37.235069       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 16:56:37.370838       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0416 16:56:37.381797       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.81.207]
	I0416 16:56:37.383264       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 16:56:37.388718       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 16:56:37.435733       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 16:56:38.737496       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 16:56:38.764389       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0416 16:56:38.781093       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 16:56:51.466047       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0416 16:56:51.868826       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a7fb69539df6] <==
	I0416 16:56:52.089701       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-qm89x"
	I0416 16:56:52.106572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="633.961814ms"
	I0416 16:56:52.122316       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="15.686904ms"
	I0416 16:56:52.190000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="67.640369ms"
	I0416 16:56:52.190122       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="63.603µs"
	I0416 16:57:04.964104       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="174.809µs"
	I0416 16:57:04.979092       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="587.33µs"
	I0416 16:57:04.995404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="188.309µs"
	I0416 16:57:05.057328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="96.005µs"
	I0416 16:57:05.964586       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0416 16:57:07.181900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="167.009µs"
	I0416 16:57:07.224163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="15.307781ms"
	I0416 16:57:07.224903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="88.905µs"
	I0416 16:57:07.277301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="19.898845ms"
	I0416 16:57:07.277810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="53.303µs"
	I0416 17:00:45.709324       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 3"
	I0416 17:00:45.728545       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-rpfpf"
	I0416 17:00:45.745464       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-mnl84"
	I0416 17:00:45.756444       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-gph6r"
	I0416 17:00:45.770175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.082711ms"
	I0416 17:00:45.784213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.744211ms"
	I0416 17:00:45.810992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="26.530372ms"
	I0416 17:00:45.811146       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="42.802µs"
	I0416 17:00:48.413892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="8.465463ms"
	I0416 17:00:48.413981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.302µs"
	
	
	==> kube-proxy [05db92f49e0d] <==
	I0416 16:56:54.468581       1 server_others.go:72] "Using iptables proxy"
	I0416 16:56:54.505964       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.81.207"]
	I0416 16:56:54.583838       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:56:54.584172       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:56:54.584273       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:56:54.590060       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:56:54.590806       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:56:54.591014       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:56:54.592331       1 config.go:188] "Starting service config controller"
	I0416 16:56:54.592517       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:56:54.592625       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:56:54.592689       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:56:54.594058       1 config.go:315] "Starting node config controller"
	I0416 16:56:54.594215       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 16:56:54.693900       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 16:56:54.693964       1 shared_informer.go:318] Caches are synced for service config
	I0416 16:56:54.694328       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e042d71e8b0e] <==
	W0416 16:56:36.501819       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 16:56:36.501922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 16:56:36.507709       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 16:56:36.507948       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 16:56:36.573671       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 16:56:36.573877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 16:56:36.602162       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 16:56:36.602205       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 16:56:36.621966       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.622272       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.648392       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 16:56:36.648623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 16:56:36.694872       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 16:56:36.694970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 16:56:36.804118       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.805424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.821863       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.822231       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.866017       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 16:56:36.866298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 16:56:36.904820       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 16:56:36.905097       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 16:56:36.917996       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.918036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0416 16:56:39.298679       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 17:09:38 ha-022600 kubelet[2220]: E0416 17:09:38.995960    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:09:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:09:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:09:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:09:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:10:38 ha-022600 kubelet[2220]: E0416 17:10:38.996042    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:10:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:10:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:10:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:10:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:11:39 ha-022600 kubelet[2220]: E0416 17:11:39.004964    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:11:39 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:11:39 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:11:39 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:11:39 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:12:38 ha-022600 kubelet[2220]: E0416 17:12:38.993284    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:12:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:12:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:12:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:12:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:13:38 ha-022600 kubelet[2220]: E0416 17:13:38.996896    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:13:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:13:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:13:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:13:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [257879ecf06b] <==
	I0416 16:57:06.255455       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 16:57:06.278834       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 16:57:06.280824       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 16:57:06.296912       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 16:57:06.297990       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-022600_72f60c68-7530-4649-9313-75b903c805c4!
	I0416 16:57:06.297754       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e843972e-8d36-423a-bd47-42ea404826e6", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-022600_72f60c68-7530-4649-9313-75b903c805c4 became leader
	I0416 16:57:06.399840       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-022600_72f60c68-7530-4649-9313-75b903c805c4!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 17:13:40.560905    4748 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600: (10.8168093s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-022600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7fdf7869d9-gph6r busybox-7fdf7869d9-mnl84
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-022600 describe pod busybox-7fdf7869d9-gph6r busybox-7fdf7869d9-mnl84
helpers_test.go:282: (dbg) kubectl --context ha-022600 describe pod busybox-7fdf7869d9-gph6r busybox-7fdf7869d9-mnl84:

                                                
                                                
-- stdout --
	Name:             busybox-7fdf7869d9-gph6r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h29q5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-h29q5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m49s (x3 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-7fdf7869d9-mnl84
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xhwqb (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-xhwqb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m49s (x3 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (41.70s)
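The two Pending pods above follow directly from the scheduler message: the busybox deployment was scaled to three replicas (see the "Scaled up replica set busybox-7fdf7869d9 to 3" event in the kube-controller-manager log), its pods repel each other via pod anti-affinity, and at this point the cluster has a single schedulable node, so only one replica can be placed. A minimal sketch of that arithmetic, with the sample message copied from the events above:

```python
import re

# FailedScheduling message copied verbatim from the pod events above.
msg = ("0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. "
       "preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.")

# "A/B nodes are available" -> A nodes can take the pod, B nodes exist.
available, total = map(int, re.match(r"(\d+)/(\d+) nodes are available", msg).groups())

# With one pod per node under anti-affinity, pending = replicas - nodes.
replicas = 3  # from the ScalingReplicaSet event above
pending = replicas - total

print(available, total, pending)  # -> 0 1 2: no free node, one node total, two pods Pending
```

This is consistent with the post-mortem: exactly two busybox pods (gph6r and mnl84) sit in Pending, and they can only schedule once additional nodes join the cluster, which is what the AddWorkerNode step below attempts.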

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (239.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-022600 -v=7 --alsologtostderr
E0416 17:16:06.860430    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-022600 -v=7 --alsologtostderr: (2m57.3754848s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr: exit status 2 (32.0124862s)

                                                
                                                
-- stdout --
	ha-022600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-022600-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-022600-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 17:16:56.919521   10008 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 17:16:56.969976   10008 out.go:291] Setting OutFile to fd 848 ...
	I0416 17:16:56.970764   10008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:16:56.970764   10008 out.go:304] Setting ErrFile to fd 876...
	I0416 17:16:56.970881   10008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:16:56.986767   10008 out.go:298] Setting JSON to false
	I0416 17:16:56.986833   10008 mustload.go:65] Loading cluster: ha-022600
	I0416 17:16:56.986911   10008 notify.go:220] Checking for updates...
	I0416 17:16:56.987564   10008 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:16:56.987564   10008 status.go:255] checking status of ha-022600 ...
	I0416 17:16:56.988027   10008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 17:16:58.908065   10008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:16:58.908065   10008 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:16:58.908065   10008 status.go:330] ha-022600 host status = "Running" (err=<nil>)
	I0416 17:16:58.908233   10008 host.go:66] Checking if "ha-022600" exists ...
	I0416 17:16:58.908832   10008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 17:17:00.844213   10008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:17:00.844213   10008 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:17:00.844213   10008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 17:17:03.119935   10008 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 17:17:03.119935   10008 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:17:03.119935   10008 host.go:66] Checking if "ha-022600" exists ...
	I0416 17:17:03.130735   10008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 17:17:03.131565   10008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 17:17:05.093294   10008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:17:05.093294   10008 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:17:05.094371   10008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 17:17:07.394959   10008 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 17:17:07.395752   10008 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:17:07.395816   10008 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 17:17:07.496929   10008 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3659472s)
	I0416 17:17:07.505311   10008 ssh_runner.go:195] Run: systemctl --version
	I0416 17:17:07.524317   10008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:17:07.547885   10008 kubeconfig.go:125] found "ha-022600" server: "https://172.19.95.254:8443"
	I0416 17:17:07.547946   10008 api_server.go:166] Checking apiserver status ...
	I0416 17:17:07.556446   10008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:17:07.592373   10008 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2100/cgroup
	W0416 17:17:07.610224   10008 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2100/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 17:17:07.619739   10008 ssh_runner.go:195] Run: ls
	I0416 17:17:07.627455   10008 api_server.go:253] Checking apiserver healthz at https://172.19.95.254:8443/healthz ...
	I0416 17:17:07.635401   10008 api_server.go:279] https://172.19.95.254:8443/healthz returned 200:
	ok
	I0416 17:17:07.635495   10008 status.go:422] ha-022600 apiserver status = Running (err=<nil>)
	I0416 17:17:07.635495   10008 status.go:257] ha-022600 status: &{Name:ha-022600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 17:17:07.635536   10008 status.go:255] checking status of ha-022600-m02 ...
	I0416 17:17:07.635685   10008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:17:09.607619   10008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:17:09.607619   10008 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:17:09.607619   10008 status.go:330] ha-022600-m02 host status = "Running" (err=<nil>)
	I0416 17:17:09.607619   10008 host.go:66] Checking if "ha-022600-m02" exists ...
	I0416 17:17:09.608618   10008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:17:11.629490   10008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:17:11.629569   10008 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:17:11.629569   10008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:17:13.962006   10008 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 17:17:13.962869   10008 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:17:13.962869   10008 host.go:66] Checking if "ha-022600-m02" exists ...
	I0416 17:17:13.971663   10008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 17:17:13.971663   10008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:17:15.824357   10008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:17:15.825301   10008 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:17:15.825387   10008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:17:18.103583   10008 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 17:17:18.103583   10008 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:17:18.103583   10008 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 17:17:18.203049   10008 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.2311455s)
	I0416 17:17:18.212684   10008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:17:18.237522   10008 kubeconfig.go:125] found "ha-022600" server: "https://172.19.95.254:8443"
	I0416 17:17:18.237558   10008 api_server.go:166] Checking apiserver status ...
	I0416 17:17:18.248487   10008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0416 17:17:18.270466   10008 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0416 17:17:18.270466   10008 status.go:422] ha-022600-m02 apiserver status = Stopped (err=<nil>)
	I0416 17:17:18.270466   10008 status.go:257] ha-022600-m02 status: &{Name:ha-022600-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 17:17:18.270466   10008 status.go:255] checking status of ha-022600-m03 ...
	I0416 17:17:18.271370   10008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m03 ).state
	I0416 17:17:20.225996   10008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:17:20.226454   10008 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:17:20.226518   10008 status.go:330] ha-022600-m03 host status = "Running" (err=<nil>)
	I0416 17:17:20.226518   10008 host.go:66] Checking if "ha-022600-m03" exists ...
	I0416 17:17:20.227211   10008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m03 ).state
	I0416 17:17:22.161371   10008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:17:22.161443   10008 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:17:22.161524   10008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 17:17:24.447855   10008 main.go:141] libmachine: [stdout =====>] : 172.19.93.94
	
	I0416 17:17:24.448570   10008 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:17:24.448570   10008 host.go:66] Checking if "ha-022600-m03" exists ...
	I0416 17:17:24.458064   10008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 17:17:24.458775   10008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m03 ).state
	I0416 17:17:26.369973   10008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:17:26.369973   10008 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:17:26.369973   10008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 17:17:28.667516   10008 main.go:141] libmachine: [stdout =====>] : 172.19.93.94
	
	I0416 17:17:28.667516   10008 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:17:28.667929   10008 sshutil.go:53] new ssh client: &{IP:172.19.93.94 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m03\id_rsa Username:docker}
	I0416 17:17:28.768986   10008 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3101s)
	I0416 17:17:28.778883   10008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:17:28.803577   10008 status.go:257] ha-022600-m03 status: &{Name:ha-022600-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:236: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600: (10.8772723s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-022600 logs -n 25: (7.4949396s)
helpers_test.go:252: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:13 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:13 UTC |
	|         | busybox-7fdf7869d9-rpfpf             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-rpfpf -- sh       |           |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.80.1             |           |                   |                |                     |                     |
	| node    | add -p ha-022600 -v=7                | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:16 UTC |
	|         | --alsologtostderr                    |           |                   |                |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:53:50
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:53:50.116950   12816 out.go:291] Setting OutFile to fd 784 ...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.117952   12816 out.go:304] Setting ErrFile to fd 696...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.138920   12816 out.go:298] Setting JSON to false
	I0416 16:53:50.141501   12816 start.go:129] hostinfo: {"hostname":"minikube5","uptime":24059,"bootTime":1713262370,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 16:53:50.141501   12816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 16:53:50.143700   12816 out.go:177] * [ha-022600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 16:53:50.144387   12816 notify.go:220] Checking for updates...
	I0416 16:53:50.144982   12816 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:53:50.145881   12816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:53:50.146373   12816 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 16:53:50.146987   12816 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:53:50.147788   12816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:53:50.149250   12816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:53:54.959514   12816 out.go:177] * Using the hyperv driver based on user configuration
	I0416 16:53:54.959811   12816 start.go:297] selected driver: hyperv
	I0416 16:53:54.959811   12816 start.go:901] validating driver "hyperv" against <nil>
	I0416 16:53:54.959811   12816 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:53:55.002641   12816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:53:55.003374   12816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:53:55.003816   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:53:55.003816   12816 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0416 16:53:55.003816   12816 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0416 16:53:55.003816   12816 start.go:340] cluster config:
	{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:53:55.003816   12816 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:53:55.005700   12816 out.go:177] * Starting "ha-022600" primary control-plane node in "ha-022600" cluster
	I0416 16:53:55.006053   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:53:55.006397   12816 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 16:53:55.006397   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:53:55.006539   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:53:55.006809   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:53:55.007075   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:53:55.007821   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json: {Name:mkc2f9747189bfa0db5ea21e93e1afafc0e89eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:53:55.008149   12816 start.go:360] acquireMachinesLock for ha-022600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:53:55.009151   12816 start.go:364] duration metric: took 1.0024ms to acquireMachinesLock for "ha-022600"
	I0416 16:53:55.009151   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:53:55.009151   12816 start.go:125] createHost starting for "" (driver="hyperv")
	I0416 16:53:55.010175   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:53:55.010397   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:53:55.010397   12816 client.go:168] LocalClient.Create starting
	I0416 16:53:55.010740   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011200   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011541   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:56.853713   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:58.347399   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:59.667644   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:02.791736   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:54:03.131710   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: Creating VM...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:05.824937   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:54:05.825022   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:54:07.398351   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:54:07.398635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:07.398635   12816 main.go:141] libmachine: Creating VHD
	I0416 16:54:07.398733   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:54:10.982944   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E9EB5342-E929-43B6-8B97-D7BDD354CEE1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:54:10.983213   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:54:10.992883   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -SizeBytes 20000MB
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stderr =====>] : 
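The sequence above (create a tiny fixed VHD, write a "magic tar header" plus the SSH key into it, convert to a dynamic VHD, then resize) seeds the boot2docker data disk: the guest looks for a tar archive at the front of the disk and unpacks it on first boot. A hedged Python sketch of just the seeding step, with illustrative paths and the VHD container footer omitted:

```python
import io
import tarfile

# Sketch of the "Writing magic tar header / SSH key tar header" step:
# build a raw image whose first bytes are a tar stream containing an SSH
# public key. The VHD footer and real key path are omitted/illustrative.
def make_seed_image(pubkey: bytes, size: int = 10 * 1024 * 1024) -> bytes:
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=".ssh/authorized_keys")
        info.size = len(pubkey)
        tar.addfile(info, io.BytesIO(pubkey))
    raw = buf.getvalue()
    # Pad with zeros to the fixed-disk payload size; tar readers treat
    # trailing zero blocks as end-of-archive.
    return raw + b"\x00" * (size - len(raw))

img = make_seed_image(b"ssh-rsa AAAA... user@host\n")
print(len(img))
```

Converting to a dynamic VHD afterwards (as `Convert-VHD ... -VHDType Dynamic -DeleteSource` does above) keeps the on-disk file small, and `Resize-VHD` then grows the virtual size to 20000MB without rewriting the seeded bytes.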
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-022600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600 -DynamicMemoryEnabled $false
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:21.397696   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600 -Count 2
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:23.302296   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\boot2docker.iso'
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:25.541060   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd'
	I0416 16:54:27.919093   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: Starting VM...
	I0416 16:54:27.919462   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600
	I0416 16:54:30.480037   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:32.483346   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:34.785082   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:34.785271   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:35.799483   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:37.788898   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:40.058231   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:40.058742   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:41.064074   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:45.301253   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:45.301420   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:46.309647   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:50.614494   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:50.615195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:51.620909   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:53.639317   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:53.640351   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:53.640405   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:55.942630   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:54:55.943393   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:55.943471   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stderr =====>] : 
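The "Waiting for host to start..." phase above is a simple poll: check the VM state, query the first adapter's first IP address, and retry until DHCP has handed one out (the empty `[stdout =====>]` lines are the not-yet-assigned case). A sketch of that loop, where `get_state`/`get_ip` stand in for the two PowerShell queries in the log:

```python
import time

# Sketch of the wait-for-IP loop: poll VM state and the first adapter
# address until a non-empty IP comes back, or give up on timeout.
def wait_for_ip(get_state, get_ip, timeout=120, interval=1.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_state() == "Running":
            ip = get_ip()
            if ip:  # empty stdout means DHCP hasn't assigned one yet
                return ip
        time.sleep(interval)
    raise TimeoutError("VM never reported an IP address")

# Simulated run: the IP appears on the third poll, as in the log above.
replies = iter(["", "", "172.19.81.207"])
print(wait_for_ip(lambda: "Running", lambda: next(replies), interval=0))
```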
	I0416 16:54:57.837395   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:54:57.837474   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:59.762683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:59.763360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:59.763440   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:02.010689   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:02.023158   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:02.023158   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:55:02.152140   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:55:02.152244   12816 buildroot.go:166] provisioning hostname "ha-022600"
	I0416 16:55:02.152322   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:03.957618   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:06.309822   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:06.310484   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:06.310484   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600 && echo "ha-022600" | sudo tee /etc/hostname
	I0416 16:55:06.479074   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600
	
	I0416 16:55:06.479182   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:08.433073   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:10.796713   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:10.797321   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:10.797321   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:55:10.944702   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
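The shell snippet the provisioner just ran is an idempotent /etc/hosts edit: do nothing if the hostname is already present, rewrite an existing `127.0.1.1` line if there is one, otherwise append a new entry. The same logic rendered in Python against an in-memory file (sample contents are illustrative):

```python
import re

# Python rendering of the idempotent hosts-file edit above: no-op if the
# hostname is present, rewrite an existing 127.0.1.1 line, else append.
def ensure_hostname(hosts: str, name: str) -> str:
    if re.search(r"\s%s$" % re.escape(name), hosts, re.M):
        return hosts  # already present: leave the file alone
    if re.search(r"^127\.0\.1\.1\s", hosts, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", "127.0.1.1 " + name,
                      hosts, count=1, flags=re.M)
    return hosts + "127.0.1.1 %s\n" % name

print(ensure_hostname("127.0.0.1 localhost\n127.0.1.1 minikube\n",
                      "ha-022600"), end="")
```

Running it a second time returns the input unchanged, which is why the SSH command above produces no output on an already-provisioned host.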
	I0416 16:55:10.944870   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:55:10.944983   12816 buildroot.go:174] setting up certificates
	I0416 16:55:10.944983   12816 provision.go:84] configureAuth start
	I0416 16:55:10.945092   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:12.933614   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:17.088334   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:19.325791   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:19.326294   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:19.326294   12816 provision.go:143] copyHostCerts
	I0416 16:55:19.326294   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:55:19.326294   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:55:19.326294   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:55:19.326900   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:55:19.328097   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:55:19.328097   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:55:19.329417   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:55:19.329417   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:55:19.329417   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:55:19.330063   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
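The `copyHostCerts` entries above follow a deliberate found/removing/cp pattern: delete any stale destination copy before copying the cert fresh, then report the byte count. A minimal sketch of that step against a temp directory (paths and file contents are placeholders, not the real `.minikube` store):

```python
import os
import shutil
import tempfile

# Sketch of the copyHostCerts pattern: remove a stale destination copy
# if present ("found ..., removing ..."), then copy and report the size.
def copy_cert(src: str, dst: str) -> int:
    if os.path.exists(dst):
        os.remove(dst)
    shutil.copyfile(src, dst)
    return os.path.getsize(dst)

d = tempfile.mkdtemp()
src = os.path.join(d, "ca.pem")
with open(src, "wb") as f:
    f.write(b"-----BEGIN CERTIFICATE-----\n")
print(copy_cert(src, os.path.join(d, "copy-of-ca.pem")))
```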
	I0416 16:55:19.330726   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600 san=[127.0.0.1 172.19.81.207 ha-022600 localhost minikube]
	I0416 16:55:19.539117   12816 provision.go:177] copyRemoteCerts
	I0416 16:55:19.547114   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:55:19.547114   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:23.727019   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:23.834423   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.287066s)
	I0416 16:55:23.834577   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:55:23.835008   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:55:23.874966   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:55:23.875470   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0416 16:55:23.923921   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:55:23.923921   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:55:23.965042   12816 provision.go:87] duration metric: took 13.0192422s to configureAuth
	I0416 16:55:23.965042   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:55:23.965741   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:55:23.965827   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:25.905339   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:25.905903   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:25.905986   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:28.170079   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:28.170419   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:28.173356   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:28.173937   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:28.173937   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:55:28.301727   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:55:28.301727   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:55:28.302425   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:55:28.302506   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:30.181889   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:32.398667   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:32.399299   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:32.399475   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:55:32.556658   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:55:32.556887   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:34.446928   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:34.446969   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:34.447053   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:36.709442   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:36.710242   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:36.714111   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:36.714437   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:36.714437   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:55:38.655929   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
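The final SSH command in this provisioning pass is a change-detection idiom: `diff` the staged unit against the installed one and only swap it in (then `daemon-reload`, `enable`, and `restart`) when they differ; on first boot the `can't stat` output shows the missing-file path taken. Stripped of sudo and systemctl, the pattern looks roughly like this (paths are illustrative and the restart is stood in for by a return flag):

```python
import filecmp
import os
import shutil
import tempfile

# Diff-or-replace idiom from the log: install the staged unit only when
# the current one is missing or differs; the caller would then
# daemon-reload and restart the service.
def install_if_changed(staged: str, installed: str) -> bool:
    if os.path.exists(installed) and filecmp.cmp(staged, installed,
                                                 shallow=False):
        return False  # identical: leave the running service alone
    shutil.move(staged, installed)  # first boot hits this path
    return True

d = tempfile.mkdtemp()
staged = os.path.join(d, "docker.service.new")
with open(staged, "w") as f:
    f.write("ExecStart=/usr/bin/dockerd\n")
print(install_if_changed(staged, os.path.join(d, "docker.service")))
```

Restaging the identical file and calling it again returns `False`, which is what keeps reruns of the provisioner from restarting Docker needlessly.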
	I0416 16:55:38.655929   12816 machine.go:97] duration metric: took 40.8162201s to provisionDockerMachine
	I0416 16:55:38.656036   12816 client.go:171] duration metric: took 1m43.6397622s to LocalClient.Create
	I0416 16:55:38.656036   12816 start.go:167] duration metric: took 1m43.6397622s to libmachine.API.Create "ha-022600"
	I0416 16:55:38.656036   12816 start.go:293] postStartSetup for "ha-022600" (driver="hyperv")
	I0416 16:55:38.656036   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:55:38.665072   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:55:38.665072   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:40.515910   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:42.764754   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:42.765404   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:42.765404   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:42.879399   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2140881s)
	I0416 16:55:42.892410   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:55:42.899117   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:55:42.899117   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:55:42.899734   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:55:42.901086   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:55:42.901154   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:55:42.911237   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:55:42.927664   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:55:42.975440   12816 start.go:296] duration metric: took 4.3191592s for postStartSetup
	I0416 16:55:42.977201   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:44.831562   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:47.134349   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:47.134788   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:47.135000   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:55:47.137270   12816 start.go:128] duration metric: took 1m52.1217609s to createHost
	I0416 16:55:47.137270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:49.024657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:51.238446   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:51.238526   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:51.242455   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:51.243052   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:51.243052   12816 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 16:55:51.369469   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286551.540248133
	
	I0416 16:55:51.369575   12816 fix.go:216] guest clock: 1713286551.540248133
	I0416 16:55:51.369575   12816 fix.go:229] Guest: 2024-04-16 16:55:51.540248133 +0000 UTC Remote: 2024-04-16 16:55:47.1372703 +0000 UTC m=+117.146546101 (delta=4.402977833s)
	I0416 16:55:51.369790   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:53.407581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:55.667543   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:55.667688   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:55.667688   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286551
	I0416 16:55:55.810591   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:55:51 UTC 2024
	
	I0416 16:55:55.810700   12816 fix.go:236] clock set: Tue Apr 16 16:55:51 UTC 2024
	 (err=<nil>)
	I0416 16:55:55.810700   12816 start.go:83] releasing machines lock for "ha-022600", held for 2m0.7946995s
	I0416 16:55:55.810965   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:57.711672   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:59.985139   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:59.985210   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:59.988730   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:55:59.988803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:59.998550   12816 ssh_runner.go:195] Run: cat /version.json
	I0416 16:55:59.998550   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:01.995788   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.995959   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.996084   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:04.379274   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.379356   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.379701   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.391360   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.392161   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.392520   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.469159   12816 ssh_runner.go:235] Completed: cat /version.json: (4.4703555s)
	I0416 16:56:04.479363   12816 ssh_runner.go:195] Run: systemctl --version
	I0416 16:56:04.584079   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5950892s)
	I0416 16:56:04.593130   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:56:04.602217   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:56:04.610705   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:56:04.639084   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:56:04.639119   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:04.639119   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:04.684127   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:56:04.713899   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:56:04.734297   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:56:04.745020   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:56:04.776657   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.806087   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:56:04.854166   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.890388   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:56:04.918140   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:56:04.946595   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:56:04.975408   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:56:05.001633   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:56:05.028505   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:56:05.053299   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:05.230466   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:56:05.260161   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:05.269988   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:56:05.302694   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.335619   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:56:05.368663   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.402792   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.435612   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:56:05.483431   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.505797   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:05.548843   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:56:05.563980   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:56:05.582552   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:56:05.624048   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:56:05.804495   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:56:05.984936   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:56:05.985183   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:56:06.032244   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:06.217075   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:08.662995   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4457805s)
	I0416 16:56:08.670977   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 16:56:08.701542   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:08.730698   12816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 16:56:08.941813   12816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 16:56:09.145939   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.331138   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 16:56:09.370232   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:09.409657   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.615575   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 16:56:09.726879   12816 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 16:56:09.737760   12816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 16:56:09.746450   12816 start.go:562] Will wait 60s for crictl version
	I0416 16:56:09.755840   12816 ssh_runner.go:195] Run: which crictl
	I0416 16:56:09.771470   12816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:56:09.827603   12816 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 16:56:09.836477   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.874967   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.907967   12816 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 16:56:09.908249   12816 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: 172.19.80.1/20
	I0416 16:56:09.924842   12816 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 16:56:09.931830   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:56:09.968931   12816 kubeadm.go:877] updating cluster {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:56:09.968931   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:56:09.975955   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:09.996899   12816 docker.go:685] Got preloaded images: 
	I0416 16:56:09.996899   12816 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0416 16:56:10.008276   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:10.035609   12816 ssh_runner.go:195] Run: which lz4
	I0416 16:56:10.042582   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0416 16:56:10.050849   12816 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 16:56:10.058074   12816 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 16:56:10.058074   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0416 16:56:11.721910   12816 docker.go:649] duration metric: took 1.6789563s to copy over tarball
	I0416 16:56:11.731181   12816 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 16:56:20.333529   12816 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.60186s)
	I0416 16:56:20.333529   12816 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 16:56:20.400516   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:20.419486   12816 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0416 16:56:20.469018   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:20.655543   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:23.229259   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5734984s)
	I0416 16:56:23.240705   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:23.262332   12816 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0416 16:56:23.262383   12816 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:56:23.262383   12816 kubeadm.go:928] updating node { 172.19.81.207 8443 v1.29.3 docker true true} ...
	I0416 16:56:23.262383   12816 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-022600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.81.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:56:23.270008   12816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 16:56:23.307277   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:23.307277   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:23.307362   12816 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:56:23.307406   12816 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.81.207 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-022600 NodeName:ha-022600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.81.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.81.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:56:23.307691   12816 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.81.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-022600"
	  kubeletExtraArgs:
	    node-ip: 172.19.81.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.81.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 16:56:23.307749   12816 kube-vip.go:111] generating kube-vip config ...
	I0416 16:56:23.318492   12816 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:56:23.343950   12816 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:56:23.344258   12816 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0416 16:56:23.353585   12816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:56:23.370542   12816 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:56:23.379813   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0416 16:56:23.397865   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0416 16:56:23.432291   12816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:56:23.462868   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0416 16:56:23.492579   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0416 16:56:23.534977   12816 ssh_runner.go:195] Run: grep 172.19.95.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:56:23.542734   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:56:23.575719   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:23.754395   12816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:56:23.781462   12816 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600 for IP: 172.19.81.207
	I0416 16:56:23.781462   12816 certs.go:194] generating shared ca certs ...
	I0416 16:56:23.781462   12816 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 16:56:23.783651   12816 certs.go:256] generating profile certs ...
	I0416 16:56:23.784402   12816 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key
	I0416 16:56:23.784569   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt with IP's: []
	I0416 16:56:23.984047   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt ...
	I0416 16:56:23.984047   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt: {Name:mk3ebdcb7f076a09a399313f7ed3edf14403a6a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.985977   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key ...
	I0416 16:56:23.985977   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key: {Name:mk94343a485b04f4b25a0ccd3245e197e7ecbec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.986215   12816 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648
	I0416 16:56:23.987265   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.81.207 172.19.95.254]
	I0416 16:56:24.317716   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 ...
	I0416 16:56:24.317716   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648: {Name:mk30f7000427979a1bcf8d6fc3995d1f7ccc170c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.319797   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 ...
	I0416 16:56:24.319797   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648: {Name:mk95e9e3e0f96031ef005f6c36470c216303a111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.320163   12816 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt
	I0416 16:56:24.331288   12816 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key
	I0416 16:56:24.332214   12816 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key
	I0416 16:56:24.332214   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt with IP's: []
	I0416 16:56:24.406574   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt ...
	I0416 16:56:24.406574   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt: {Name:mk73158a02cd8861e90a3b76d50704b360c358ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.407013   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key ...
	I0416 16:56:24.407013   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key: {Name:mk6842e2af8fadaf278ec7592edd5bd96f07c8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.408078   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:56:24.408945   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:56:24.409732   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:56:24.417870   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:56:24.418145   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 16:56:24.418533   12816 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 16:56:24.418533   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 16:56:24.418811   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 16:56:24.418990   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 16:56:24.419161   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 16:56:24.419368   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 16:56:24.419647   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 16:56:24.419767   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:24.419867   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 16:56:24.420003   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:56:24.466985   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:56:24.509816   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:56:24.554817   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:56:24.603006   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 16:56:24.646596   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 16:56:24.694120   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:56:24.741669   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 16:56:24.785888   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 16:56:24.829403   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:56:24.891821   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 16:56:24.933883   12816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 16:56:24.975091   12816 ssh_runner.go:195] Run: openssl version
	I0416 16:56:24.994129   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 16:56:25.021821   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.028512   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.037989   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.054924   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:56:25.080011   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:56:25.106815   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.113980   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.126339   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.144599   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:56:25.170309   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 16:56:25.199080   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.206080   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.214031   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.237026   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
	I0416 16:56:25.266837   12816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:56:25.273408   12816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:56:25.273858   12816 kubeadm.go:391] StartCluster: {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:56:25.281991   12816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 16:56:25.314891   12816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 16:56:25.342248   12816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 16:56:25.368032   12816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 16:56:25.385737   12816 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 16:56:25.385737   12816 kubeadm.go:156] found existing configuration files:
	
	I0416 16:56:25.393851   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 16:56:25.410393   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 16:56:25.421874   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 16:56:25.453762   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 16:56:25.468769   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 16:56:25.477353   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 16:56:25.501898   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.515888   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 16:56:25.524885   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.548518   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 16:56:25.563660   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 16:56:25.572269   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 16:56:25.587981   12816 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 16:56:25.791977   12816 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 16:56:25.791977   12816 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 16:56:25.958638   12816 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 16:56:25.959035   12816 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 16:56:25.959403   12816 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 16:56:26.228464   12816 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 16:56:26.229544   12816 out.go:204]   - Generating certificates and keys ...
	I0416 16:56:26.229862   12816 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 16:56:26.230882   12816 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 16:56:26.359024   12816 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 16:56:26.583044   12816 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 16:56:26.715543   12816 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 16:56:27.014892   12816 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 16:56:27.414264   12816 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 16:56:27.414467   12816 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.642396   12816 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 16:56:27.642770   12816 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.844566   12816 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 16:56:28.089475   12816 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 16:56:28.543900   12816 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 16:56:28.548586   12816 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 16:56:29.051829   12816 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 16:56:29.485679   12816 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 16:56:29.830737   12816 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 16:56:30.055972   12816 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 16:56:30.310446   12816 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 16:56:30.311113   12816 out.go:204]   - Booting up control plane ...
	I0416 16:56:30.311289   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 16:56:30.311970   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 16:56:30.317049   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 16:56:30.342443   12816 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 16:56:30.526725   12816 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 16:56:37.142045   12816 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.615653 seconds
	I0416 16:56:37.159025   12816 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 16:56:37.175108   12816 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 16:56:37.707867   12816 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 16:56:37.708715   12816 kubeadm.go:309] [mark-control-plane] Marking the node ha-022600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 16:56:38.222729   12816 kubeadm.go:309] [bootstrap-token] Using token: a3r5qn.ikva200bfcppykg5
	I0416 16:56:38.223819   12816 out.go:204]   - Configuring RBAC rules ...
	I0416 16:56:38.224231   12816 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 16:56:38.232416   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 16:56:38.244982   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 16:56:38.249926   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 16:56:38.257723   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 16:56:38.262029   12816 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 16:56:38.279883   12816 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 16:56:38.592701   12816 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 16:56:38.638273   12816 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 16:56:38.639572   12816 kubeadm.go:309] 
	I0416 16:56:38.640154   12816 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 16:56:38.640230   12816 kubeadm.go:309] 
	I0416 16:56:38.640982   12816 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 16:56:38.641038   12816 kubeadm.go:309] 
	I0416 16:56:38.641299   12816 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 16:56:38.641581   12816 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309] 
	I0416 16:56:38.641989   12816 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 16:56:38.642031   12816 kubeadm.go:309] 
	I0416 16:56:38.642184   12816 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 16:56:38.642228   12816 kubeadm.go:309] 
	I0416 16:56:38.642350   12816 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 16:56:38.642660   12816 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 16:56:38.642862   12816 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 16:56:38.642900   12816 kubeadm.go:309] 
	I0416 16:56:38.643166   12816 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 16:56:38.643426   12816 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 16:56:38.643426   12816 kubeadm.go:309] 
	I0416 16:56:38.643613   12816 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.643867   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 16:56:38.643909   12816 kubeadm.go:309] 	--control-plane 
	I0416 16:56:38.643961   12816 kubeadm.go:309] 
	I0416 16:56:38.644233   12816 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 16:56:38.644272   12816 kubeadm.go:309] 
	I0416 16:56:38.644444   12816 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.644734   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 16:56:38.647455   12816 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 16:56:38.647488   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:38.647539   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:38.648246   12816 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 16:56:38.657141   12816 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 16:56:38.671263   12816 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 16:56:38.671263   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 16:56:38.722410   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 16:56:39.265655   12816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-022600 minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=ha-022600 minikube.k8s.io/primary=true
	I0416 16:56:39.290244   12816 ops.go:34] apiserver oom_adj: -16
	I0416 16:56:39.441163   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.950155   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.453751   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.955147   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.455931   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.953044   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.454696   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.949299   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.454962   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.953447   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.456402   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.956686   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.449476   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.951602   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.451988   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.949212   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.449356   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.950703   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.458777   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.956811   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.456669   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.943595   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.443906   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.950503   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.454863   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.944285   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:52.083562   12816 kubeadm.go:1107] duration metric: took 12.8170858s to wait for elevateKubeSystemPrivileges
	W0416 16:56:52.083816   12816 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 16:56:52.083816   12816 kubeadm.go:393] duration metric: took 26.808438s to StartCluster
	I0416 16:56:52.083816   12816 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.084214   12816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:52.086643   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.088384   12816 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:56:52.088384   12816 start.go:240] waiting for startup goroutines ...
	I0416 16:56:52.088384   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 16:56:52.088384   12816 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 16:56:52.088630   12816 addons.go:69] Setting storage-provisioner=true in profile "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:234] Setting addon storage-provisioner=true in "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:69] Setting default-storageclass=true in profile "ha-022600"
	I0416 16:56:52.088850   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:52.088964   12816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-022600"
	I0416 16:56:52.088964   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:56:52.090289   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.090671   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.207597   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 16:56:52.469504   12816 start.go:946] {"host.minikube.internal": 172.19.80.1} host record injected into CoreDNS's ConfigMap
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.165583   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.165635   12816 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:54.165635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.166734   12816 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:56:54.166340   12816 kapi.go:59] client config for ha-022600: &rest.Config{Host:"https://172.19.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 16:56:54.167133   12816 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:56:54.167133   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 16:56:54.167133   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:54.167791   12816 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 16:56:54.168180   12816 addons.go:234] Setting addon default-storageclass=true in "ha-022600"
	I0416 16:56:54.168347   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:54.169251   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:56.312581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.312988   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313046   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313270   12816 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 16:56:56.313270   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.330966   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:58.735727   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:58.735876   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.736103   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:58.898469   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:00.676245   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:57:00.828151   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 16:57:01.248041   12816 round_trippers.go:463] GET https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0416 16:57:01.248041   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.248041   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.248041   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.261890   12816 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0416 16:57:01.262478   12816 round_trippers.go:463] PUT https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0416 16:57:01.262478   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Content-Type: application/json
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.262478   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.268964   12816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:57:01.269995   12816 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 16:57:01.270495   12816 addons.go:505] duration metric: took 9.181591s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 16:57:01.270576   12816 start.go:245] waiting for cluster config update ...
	I0416 16:57:01.270618   12816 start.go:254] writing updated cluster config ...
	I0416 16:57:01.271859   12816 out.go:177] 
	I0416 16:57:01.284169   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:57:01.284169   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.285951   12816 out.go:177] * Starting "ha-022600-m02" control-plane node in "ha-022600" cluster
	I0416 16:57:01.286952   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:57:01.286952   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:57:01.286952   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:57:01.286952   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:57:01.286952   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.296247   12816 start.go:360] acquireMachinesLock for ha-022600-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:57:01.297324   12816 start.go:364] duration metric: took 1.0773ms to acquireMachinesLock for "ha-022600-m02"
	I0416 16:57:01.297559   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:57:01.297559   12816 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0416 16:57:01.297559   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:57:01.297559   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:57:01.297559   12816 client.go:168] LocalClient.Create starting
	I0416 16:57:01.298838   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299293   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:57:03.017072   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:57:03.017279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:03.017366   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:09.316740   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:57:09.669552   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: Creating VM...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:12.690107   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:57:12.690185   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:14.267157   12816 main.go:141] libmachine: Creating VHD
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : FE960248-03C1-43D6-B7AE-A60D4C86308B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:57:17.758158   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:57:20.709379   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:20.709950   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:20.710019   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -SizeBytes 20000MB
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-022600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600-m02 -DynamicMemoryEnabled $false
	I0416 16:57:28.159153   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:28.159229   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:28.159409   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600-m02 -Count 2
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\boot2docker.iso'
	I0416 16:57:32.420739   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:32.421735   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:32.421878   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd'
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: Starting VM...
	I0416 16:57:34.780971   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600-m02
	I0416 16:57:37.369505   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:57:37.369767   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:39.415286   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:42.700464   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:48.000886   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:49.992438   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:49.992894   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:49.992930   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:53.290891   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:55.287716   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:55.287962   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:55.288037   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:58.572803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:02.905391   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:04.899479   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:58:04.899479   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:06.914869   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:09.273511   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:09.273546   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:09.277783   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:09.278406   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:09.278406   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:58:09.413281   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:58:09.413281   12816 buildroot.go:166] provisioning hostname "ha-022600-m02"
	I0416 16:58:09.413281   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:11.439079   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:13.805295   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:13.805684   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:13.805684   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600-m02 && echo "ha-022600-m02" | sudo tee /etc/hostname
	I0416 16:58:13.957933   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600-m02
	
	I0416 16:58:13.958021   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:18.176996   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:18.178002   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:18.182057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:18.182681   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:18.182681   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:58:18.315751   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:58:18.315853   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:58:18.315853   12816 buildroot.go:174] setting up certificates
	I0416 16:58:18.315853   12816 provision.go:84] configureAuth start
	I0416 16:58:18.315853   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:20.243862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:22.525833   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:22.525945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:22.526057   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:24.418894   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:26.735560   12816 provision.go:143] copyHostCerts
	I0416 16:58:26.736546   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:58:26.736627   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:58:26.737290   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:58:26.737900   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:58:26.737900   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:58:26.738191   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:58:26.738908   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:58:26.738977   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:58:26.739652   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600-m02 san=[127.0.0.1 172.19.80.125 ha-022600-m02 localhost minikube]
	I0416 16:58:26.917277   12816 provision.go:177] copyRemoteCerts
	I0416 16:58:26.926308   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:58:26.926600   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:28.830343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:31.113681   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:31.229222   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3026703s)
	I0416 16:58:31.229222   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:58:31.229700   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:58:31.279666   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:58:31.280307   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:58:31.328101   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:58:31.328245   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:58:31.382563   12816 provision.go:87] duration metric: took 13.065969s to configureAuth
	I0416 16:58:31.382563   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:58:31.383343   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:58:31.383343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:33.331275   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:35.653673   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:35.653721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:35.656855   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:35.657430   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:35.657430   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:58:35.781565   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:58:35.781565   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:58:35.781565   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:58:35.782090   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:37.696344   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:39.961057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:39.961515   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:39.961564   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.81.207"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:58:40.123664   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.81.207
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:58:40.123818   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:42.064878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:42.064974   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:42.065152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:44.330103   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:44.330731   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:44.330731   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:58:46.283136   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 16:58:46.283253   12816 machine.go:97] duration metric: took 41.3814279s to provisionDockerMachine
	I0416 16:58:46.283253   12816 client.go:171] duration metric: took 1m44.9797412s to LocalClient.Create
	I0416 16:58:46.283253   12816 start.go:167] duration metric: took 1m44.9797412s to libmachine.API.Create "ha-022600"
	I0416 16:58:46.283253   12816 start.go:293] postStartSetup for "ha-022600-m02" (driver="hyperv")
	I0416 16:58:46.283345   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:58:46.292724   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:58:46.292724   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:50.480821   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:50.575284   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2823171s)
	I0416 16:58:50.584260   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:58:50.591292   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:58:50.591900   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:58:50.591900   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:58:50.601073   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:58:50.618807   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:58:50.671301   12816 start.go:296] duration metric: took 4.3877068s for postStartSetup
	I0416 16:58:50.673161   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:52.621684   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:54.923763   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:58:54.926483   12816 start.go:128] duration metric: took 1m53.622481s to createHost
	I0416 16:58:54.926657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:56.793184   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:59.024255   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:59.025184   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:59.029108   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:59.029633   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:59.029730   12816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:58:59.149333   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286739.315259098
	
	I0416 16:58:59.149333   12816 fix.go:216] guest clock: 1713286739.315259098
	I0416 16:58:59.149333   12816 fix.go:229] Guest: 2024-04-16 16:58:59.315259098 +0000 UTC Remote: 2024-04-16 16:58:54.9265716 +0000 UTC m=+304.925199701 (delta=4.388687498s)
	I0416 16:58:59.149333   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:01.054656   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:03.307071   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:59:03.307459   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:59:03.307531   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286739
	I0416 16:59:03.449024   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:58:59 UTC 2024
	
	I0416 16:59:03.449024   12816 fix.go:236] clock set: Tue Apr 16 16:58:59 UTC 2024
	 (err=<nil>)
	I0416 16:59:03.449024   12816 start.go:83] releasing machines lock for "ha-022600-m02", held for 2m2.1447745s
	I0416 16:59:03.450039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:07.739042   12816 out.go:177] * Found network options:
	I0416 16:59:07.739784   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.740404   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.741027   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.741505   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 16:59:07.742708   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.744988   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:59:07.745153   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:07.752817   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 16:59:07.752817   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:09.758953   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:12.157582   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.158536   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.159044   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.185179   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.257436   12816 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5043642s)
	W0416 16:59:12.257436   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:59:12.266545   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:59:12.367206   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:59:12.367296   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:12.367330   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6219642s)
	I0416 16:59:12.367330   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:12.423201   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:59:12.453988   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:59:12.472992   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:59:12.482991   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:59:12.510864   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.538866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:59:12.565866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.597751   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:59:12.622761   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:59:12.648905   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:59:12.674904   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:59:12.713452   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:59:12.741495   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:59:12.768497   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:12.975524   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:59:13.011635   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:13.023647   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:59:13.058146   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.091991   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:59:13.139058   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.173081   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.208242   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:59:13.259511   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.282094   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:13.329081   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:59:13.344832   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:59:13.362131   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:59:13.403377   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:59:13.597444   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:59:13.768147   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:59:13.768278   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:59:13.808294   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:13.987216   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:00:15.104612   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1138396s)
	I0416 17:00:15.115049   12816 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 17:00:15.145752   12816 out.go:177] 
	W0416 17:00:15.146473   12816 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 16:58:45 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.076842920Z" level=info msg="Starting up"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.077687177Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.078706068Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.109665355Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138411128Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138448735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138508447Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138523049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138600164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138632670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138848110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138955930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139030244Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139045347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139142365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139433520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142495192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142588309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142778845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142795748Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143044695Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143174419Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143191422Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.152862930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153144583Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153313214Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153337519Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153354522Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153467543Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153957434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154159572Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154195179Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154212082Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154230586Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154258491Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154272393Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154287696Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154303599Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154317302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154330504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154344107Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154373612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154392516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154406618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154421121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154434024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154447526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154460128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154474031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154498536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154514539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154525841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154555046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154568249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154583952Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154604755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154629960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154642062Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154700973Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154916114Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155014532Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155030135Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155203567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155302486Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155325090Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155706861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155796078Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155907599Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155947306Z" level=info msg="containerd successfully booted in 0.047582s"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.119001526Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.129323458Z" level=info msg="Loading containers: start."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.358382320Z" level=info msg="Loading containers: done."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377033580Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377149301Z" level=info msg="Daemon has completed initialization"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.447556885Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 16:58:46 ha-022600-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.449134569Z" level=info msg="API listen on [::]:2376"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.178053148Z" level=info msg="Processing signal 'terminated'"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.179830517Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 16:59:14 ha-022600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.180814055Z" level=info msg="Daemon shutdown complete"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181020363Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181054564Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:59:15 ha-022600-m02 dockerd[1019]: time="2024-04-16T16:59:15.248212596Z" level=info msg="Starting up"
	Apr 16 17:00:15 ha-022600-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0416 17:00:15.146611   12816 out.go:239] * 
	W0416 17:00:15.147806   12816 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 17:00:15.148383   12816 out.go:177] 
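	Reading the root cause out of a long journalctl dump like the one above is easier with a small filter. The sketch below is a hypothetical helper (not part of minikube or this test harness) that scans log lines for failure-level markers, so entries like `failed to start daemon: failed to dial "/run/containerd/containerd.sock"` surface immediately:

```python
import re

# Patterns that typically mark the root cause in a docker/journalctl dump.
# (Heuristic, chosen to match the failure lines seen in this report.)
FAILURE_MARKERS = re.compile(
    r"failed to start daemon|Failed with result|Failed to start|level=(error|fatal)"
)

def extract_failures(journal_text: str) -> list[str]:
    """Return the log lines that indicate a hard failure, in order."""
    return [
        line.strip()
        for line in journal_text.splitlines()
        if FAILURE_MARKERS.search(line)
    ]

# Abbreviated excerpt from the journalctl output above.
sample = """\
Apr 16 16:59:15 ha-022600-m02 dockerd[1019]: time="2024-04-16T16:59:15Z" level=info msg="Starting up"
Apr 16 17:00:15 ha-022600-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
"""

for line in extract_failures(sample):
    print(line)
```

	Run against the full dump, this reduces the ~90 lines of containerd plugin-loading noise to the two lines that explain the RUNTIME_ENABLE exit.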
	
	
	==> Docker <==
	Apr 16 17:00:46 ha-022600 dockerd[1331]: time="2024-04-16T17:00:46.156267612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:00:46 ha-022600 cri-dockerd[1232]: time="2024-04-16T17:00:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8a4de3aa24af1283627968e3b5972a40e7430994e81c6c1dc2f08b918b9b3ce1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 16 17:00:47 ha-022600 cri-dockerd[1232]: time="2024-04-16T17:00:47Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 16 17:00:47 ha-022600 dockerd[1331]: time="2024-04-16T17:00:47.477180079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 17:00:47 ha-022600 dockerd[1331]: time="2024-04-16T17:00:47.477294385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 17:00:47 ha-022600 dockerd[1331]: time="2024-04-16T17:00:47.477311186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:00:47 ha-022600 dockerd[1331]: time="2024-04-16T17:00:47.478281439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d38b1a5f4caa8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   16 minutes ago      Running             busybox                   0                   8a4de3aa24af1       busybox-7fdf7869d9-rpfpf
	3fe545bfad4e6       cbb01a7bd410d                                                                                         20 minutes ago      Running             coredns                   0                   093278b3840ef       coredns-76f75df574-qm89x
	979dee88be2b4       cbb01a7bd410d                                                                                         20 minutes ago      Running             coredns                   0                   4ad38b0d59335       coredns-76f75df574-ww2r6
	257879ecf06b2       6e38f40d628db                                                                                         20 minutes ago      Running             storage-provisioner       0                   bf991c3e34e2d       storage-provisioner
	be245de9ef545       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              20 minutes ago      Running             kindnet-cni               0                   92c35b3fd0967       kindnet-mwqvl
	05db92f49e0df       a1d263b5dc5b0                                                                                         20 minutes ago      Running             kube-proxy                0                   12380f49c1509       kube-proxy-2vddt
	d1ba82cd26254       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     21 minutes ago      Running             kube-vip                  0                   fa2c75c4c8d59       kube-vip-ha-022600
	a7fb69539df62       6052a25da3f97                                                                                         21 minutes ago      Running             kube-controller-manager   0                   b536621e20d4b       kube-controller-manager-ha-022600
	4fd5df8c9fd37       39f995c9f1996                                                                                         21 minutes ago      Running             kube-apiserver            0                   5a7a1e80caeb4       kube-apiserver-ha-022600
	e042d71e8b0e8       8c390d98f50c0                                                                                         21 minutes ago      Running             kube-scheduler            0                   5a2551c91a1b6       kube-scheduler-ha-022600
	c29b0762ff0bf       3861cfcd7c04c                                                                                         21 minutes ago      Running             etcd                      0                   c8a9aa3126cf5       etcd-ha-022600
	
	
	==> coredns [3fe545bfad4e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55981 - 4765 "HINFO IN 3735046377920793891.8143170502200932773. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058936595s
	[INFO] 10.244.0.4:43350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000388921s
	[INFO] 10.244.0.4:35317 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.052221997s
	[INFO] 10.244.0.4:52074 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.040069369s
	[INFO] 10.244.0.4:49068 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.053312593s
	[INFO] 10.244.0.4:54711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123507s
	[INFO] 10.244.0.4:44694 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037006811s
	[INFO] 10.244.0.4:33399 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124606s
	[INFO] 10.244.0.4:37329 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000241612s
	[INFO] 10.244.0.4:57333 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131407s
	[INFO] 10.244.0.4:38806 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060403s
	[INFO] 10.244.0.4:48948 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000263914s
	[INFO] 10.244.0.4:51825 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177309s
	[INFO] 10.244.0.4:53272 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00018301s
	
	
	==> coredns [979dee88be2b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50127 - 24072 "HINFO IN 7665836187497317301.2285362183679153792. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027543487s
	[INFO] 10.244.0.4:34822 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000224011s
	[INFO] 10.244.0.4:48911 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000349218s
	[INFO] 10.244.0.4:43369 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023699624s
	[INFO] 10.244.0.4:56309 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000258914s
	[INFO] 10.244.0.4:36791 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.003463479s
	[INFO] 10.244.0.4:55996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000301816s
	[INFO] 10.244.0.4:35967 - 5 "PTR IN 1.80.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000116506s
	
	
	==> describe nodes <==
	Name:               ha-022600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-022600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-022600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:56:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-022600
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:17:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:16:34 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:16:34 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:16:34 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:16:34 +0000   Tue, 16 Apr 2024 16:57:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.81.207
	  Hostname:    ha-022600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4674338fa494bbcb2e21e2b4385c5e1
	  System UUID:                201025fc-0c03-cc49-a194-29d6500971a2
	  Boot ID:                    6ae5bedd-6e8e-4f58-b08c-8e9912fd04de
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-rpfpf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-76f75df574-qm89x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 coredns-76f75df574-ww2r6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-ha-022600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-mwqvl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-022600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-022600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-2vddt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-022600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-022600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 20m   kube-proxy       
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node ha-022600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node ha-022600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node ha-022600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20m   node-controller  Node ha-022600 event: Registered Node ha-022600 in Controller
	  Normal  NodeReady                20m   kubelet          Node ha-022600 status is now: NodeReady
	
	
	Name:               ha-022600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-022600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-022600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T17_16_38_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:16:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-022600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:17:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:17:08 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:17:08 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:17:08 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:17:08 +0000   Tue, 16 Apr 2024 17:16:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.93.94
	  Hostname:    ha-022600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 cefa169b716045589e59382d0939ad48
	  System UUID:                25782c5b-4e02-0547-b063-db6b9c5f1f5b
	  Boot ID:                    e7c67d41-aa2d-47a1-952b-fa7ff5422e05
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7c2px       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      70s
	  kube-system                 kube-proxy-ss5lp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 59s                kube-proxy       
	  Normal  Starting                 70s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s (x2 over 70s)  kubelet          Node ha-022600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s (x2 over 70s)  kubelet          Node ha-022600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s (x2 over 70s)  kubelet          Node ha-022600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  70s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           66s                node-controller  Node ha-022600-m03 event: Registered Node ha-022600-m03 in Controller
	  Normal  NodeReady                53s                kubelet          Node ha-022600-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.656516] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr16 16:55] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.165290] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[Apr16 16:56] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.091843] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.493988] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.172637] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.230010] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +2.695048] systemd-fstab-generator[1185]: Ignoring "noauto" option for root device
	[  +0.219400] systemd-fstab-generator[1197]: Ignoring "noauto" option for root device
	[  +0.196554] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.267217] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[ +11.053282] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +0.095458] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.012264] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +6.758798] systemd-fstab-generator[1720]: Ignoring "noauto" option for root device
	[  +0.093227] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.850543] systemd-fstab-generator[2213]: Ignoring "noauto" option for root device
	[  +0.130310] kauditd_printk_skb: 72 callbacks suppressed
	[ +15.381320] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.386371] kauditd_printk_skb: 29 callbacks suppressed
	[Apr16 17:00] hrtimer: interrupt took 5042261 ns
	[  +0.908827] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [c29b0762ff0b] <==
	{"level":"info","ts":"2024-04-16T17:06:33.350784Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":964}
	{"level":"info","ts":"2024-04-16T17:06:33.393755Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":964,"took":"42.49244ms","hash":1730924367,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2433024,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-04-16T17:06:33.395361Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1730924367,"revision":964,"compact-revision":-1}
	{"level":"info","ts":"2024-04-16T17:11:33.360995Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1502}
	{"level":"info","ts":"2024-04-16T17:11:33.366072Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1502,"took":"4.116913ms","hash":127222243,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1818624,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-16T17:11:33.366162Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":127222243,"revision":1502,"compact-revision":964}
	{"level":"info","ts":"2024-04-16T17:15:11.421098Z","caller":"traceutil/trace.go:171","msg":"trace[1208553513] transaction","detail":"{read_only:false; response_revision:2431; number_of_response:1; }","duration":"155.410586ms","start":"2024-04-16T17:15:11.265667Z","end":"2024-04-16T17:15:11.421077Z","steps":["trace[1208553513] 'process raft request'  (duration: 155.135072ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:31.529032Z","caller":"traceutil/trace.go:171","msg":"trace[505251683] linearizableReadLoop","detail":"{readStateIndex:2832; appliedIndex:2831; }","duration":"107.445309ms","start":"2024-04-16T17:16:31.421572Z","end":"2024-04-16T17:16:31.529017Z","steps":["trace[505251683] 'read index received'  (duration: 107.319103ms)","trace[505251683] 'applied index is now lower than readState.Index'  (duration: 125.606µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:16:31.529184Z","caller":"traceutil/trace.go:171","msg":"trace[359290184] transaction","detail":"{read_only:false; response_revision:2575; number_of_response:1; }","duration":"197.441024ms","start":"2024-04-16T17:16:31.331735Z","end":"2024-04-16T17:16:31.529176Z","steps":["trace[359290184] 'process raft request'  (duration: 197.196912ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:31.529431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.83703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-04-16T17:16:31.52969Z","caller":"traceutil/trace.go:171","msg":"trace[1576069612] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2575; }","duration":"108.130545ms","start":"2024-04-16T17:16:31.421545Z","end":"2024-04-16T17:16:31.529676Z","steps":["trace[1576069612] 'agreement among raft nodes before linearized reading'  (duration: 107.801628ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.727834Z","caller":"traceutil/trace.go:171","msg":"trace[1449824028] transaction","detail":"{read_only:false; response_revision:2578; number_of_response:1; }","duration":"364.497189ms","start":"2024-04-16T17:16:33.363317Z","end":"2024-04-16T17:16:33.727815Z","steps":["trace[1449824028] 'process raft request'  (duration: 364.339681ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.729115Z","caller":"traceutil/trace.go:171","msg":"trace[948704194] linearizableReadLoop","detail":"{readStateIndex:2837; appliedIndex:2836; }","duration":"283.56914ms","start":"2024-04-16T17:16:33.445533Z","end":"2024-04-16T17:16:33.729102Z","steps":["trace[948704194] 'read index received'  (duration: 282.906606ms)","trace[948704194] 'applied index is now lower than readState.Index'  (duration: 662.034µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:16:33.72965Z","caller":"traceutil/trace.go:171","msg":"trace[1908879286] transaction","detail":"{read_only:false; response_revision:2579; number_of_response:1; }","duration":"291.495046ms","start":"2024-04-16T17:16:33.438143Z","end":"2024-04-16T17:16:33.729638Z","steps":["trace[1908879286] 'process raft request'  (duration: 290.677204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.729668Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:16:33.363297Z","time spent":"364.643596ms","remote":"127.0.0.1:49456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":25,"response count":0,"response size":38,"request content":"compare:<key:\"compact_rev_key\" version:3 > success:<request_put:<key:\"compact_rev_key\" value_size:4 >> failure:<request_range:<key:\"compact_rev_key\" > >"}
	{"level":"warn","ts":"2024-04-16T17:16:33.729962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.040139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-04-16T17:16:33.73064Z","caller":"traceutil/trace.go:171","msg":"trace[1591257630] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2579; }","duration":"186.677072ms","start":"2024-04-16T17:16:33.543885Z","end":"2024-04-16T17:16:33.730562Z","steps":["trace[1591257630] 'agreement among raft nodes before linearized reading'  (duration: 185.842129ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.730022Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.488987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:16:33.731097Z","caller":"traceutil/trace.go:171","msg":"trace[339406949] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2579; }","duration":"285.581443ms","start":"2024-04-16T17:16:33.445505Z","end":"2024-04-16T17:16:33.731087Z","steps":["trace[339406949] 'agreement among raft nodes before linearized reading'  (duration: 284.501387ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.730066Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.750168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-04-16T17:16:33.731323Z","caller":"traceutil/trace.go:171","msg":"trace[1323315847] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2579; }","duration":"143.028733ms","start":"2024-04-16T17:16:33.588284Z","end":"2024-04-16T17:16:33.731313Z","steps":["trace[1323315847] 'agreement among raft nodes before linearized reading'  (duration: 141.746268ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.740796Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2041}
	{"level":"info","ts":"2024-04-16T17:16:33.745817Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2041,"took":"4.568334ms","hash":1427640317,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1806336,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-16T17:16:33.746025Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1427640317,"revision":2041,"compact-revision":1502}
	{"level":"info","ts":"2024-04-16T17:16:40.98492Z","caller":"traceutil/trace.go:171","msg":"trace[2045382782] transaction","detail":"{read_only:false; response_revision:2627; number_of_response:1; }","duration":"150.576419ms","start":"2024-04-16T17:16:40.834317Z","end":"2024-04-16T17:16:40.984893Z","steps":["trace[2045382782] 'process raft request'  (duration: 150.385009ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:17:47 up 23 min,  0 users,  load average: 0.27, 0.26, 0.20
	Linux ha-022600 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [be245de9ef54] <==
	I0416 17:16:41.541002       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.19.93.94 Flags: [] Table: 0} 
	I0416 17:16:51.547400       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:16:51.547616       1 main.go:227] handling current node
	I0416 17:16:51.547649       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:16:51.547672       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:17:01.553328       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:17:01.553365       1 main.go:227] handling current node
	I0416 17:17:01.553375       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:17:01.553381       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:17:11.565589       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:17:11.565683       1 main.go:227] handling current node
	I0416 17:17:11.565697       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:17:11.565705       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:17:21.576327       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:17:21.576373       1 main.go:227] handling current node
	I0416 17:17:21.576385       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:17:21.576392       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:17:31.591549       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:17:31.591646       1 main.go:227] handling current node
	I0416 17:17:31.591661       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:17:31.591669       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:17:41.598001       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:17:41.598040       1 main.go:227] handling current node
	I0416 17:17:41.598052       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:17:41.598058       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [4fd5df8c9fd3] <==
	I0416 16:56:35.510308       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 16:56:35.512679       1 aggregator.go:165] initial CRD sync complete...
	I0416 16:56:35.516211       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 16:56:35.516249       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 16:56:35.516256       1 cache.go:39] Caches are synced for autoregister controller
	I0416 16:56:35.517473       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 16:56:35.522352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 16:56:35.529558       1 controller.go:624] quota admission added evaluator for: namespaces
	I0416 16:56:35.542494       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 16:56:36.411016       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0416 16:56:36.418409       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0416 16:56:36.419376       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 16:56:37.172553       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 16:56:37.235069       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 16:56:37.370838       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0416 16:56:37.381797       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.81.207]
	I0416 16:56:37.383264       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 16:56:37.388718       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 16:56:37.435733       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 16:56:38.737496       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 16:56:38.764389       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0416 16:56:38.781093       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 16:56:51.466047       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0416 16:56:51.868826       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	http2: server: error reading preface from client 172.19.93.94:54156: read tcp 172.19.95.254:8443->172.19.93.94:54156: read: connection reset by peer
	
	
	==> kube-controller-manager [a7fb69539df6] <==
	I0416 16:57:04.995404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="188.309µs"
	I0416 16:57:05.057328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="96.005µs"
	I0416 16:57:05.964586       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0416 16:57:07.181900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="167.009µs"
	I0416 16:57:07.224163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="15.307781ms"
	I0416 16:57:07.224903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="88.905µs"
	I0416 16:57:07.277301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="19.898845ms"
	I0416 16:57:07.277810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="53.303µs"
	I0416 17:00:45.709324       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 3"
	I0416 17:00:45.728545       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-rpfpf"
	I0416 17:00:45.745464       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-mnl84"
	I0416 17:00:45.756444       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-gph6r"
	I0416 17:00:45.770175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.082711ms"
	I0416 17:00:45.784213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.744211ms"
	I0416 17:00:45.810992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="26.530372ms"
	I0416 17:00:45.811146       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="42.802µs"
	I0416 17:00:48.413892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="8.465463ms"
	I0416 17:00:48.413981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.302µs"
	I0416 17:16:37.436480       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-022600-m03\" does not exist"
	I0416 17:16:37.446130       1 range_allocator.go:380] "Set node PodCIDR" node="ha-022600-m03" podCIDRs=["10.244.1.0/24"]
	I0416 17:16:37.459239       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7c2px"
	I0416 17:16:37.461522       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ss5lp"
	I0416 17:16:41.186805       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-022600-m03"
	I0416 17:16:41.187824       1 event.go:376] "Event occurred" object="ha-022600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-022600-m03 event: Registered Node ha-022600-m03 in Controller"
	I0416 17:16:54.835196       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-022600-m03"
	
	
	==> kube-proxy [05db92f49e0d] <==
	I0416 16:56:54.468581       1 server_others.go:72] "Using iptables proxy"
	I0416 16:56:54.505964       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.81.207"]
	I0416 16:56:54.583838       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:56:54.584172       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:56:54.584273       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:56:54.590060       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:56:54.590806       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:56:54.591014       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:56:54.592331       1 config.go:188] "Starting service config controller"
	I0416 16:56:54.592517       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:56:54.592625       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:56:54.592689       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:56:54.594058       1 config.go:315] "Starting node config controller"
	I0416 16:56:54.594215       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 16:56:54.693900       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 16:56:54.693964       1 shared_informer.go:318] Caches are synced for service config
	I0416 16:56:54.694328       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e042d71e8b0e] <==
	W0416 16:56:36.501819       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 16:56:36.501922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 16:56:36.507709       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 16:56:36.507948       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 16:56:36.573671       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 16:56:36.573877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 16:56:36.602162       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 16:56:36.602205       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 16:56:36.621966       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.622272       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.648392       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 16:56:36.648623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 16:56:36.694872       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 16:56:36.694970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 16:56:36.804118       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.805424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.821863       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.822231       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.866017       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 16:56:36.866298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 16:56:36.904820       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 16:56:36.905097       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 16:56:36.917996       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.918036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0416 16:56:39.298679       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 17:13:38 ha-022600 kubelet[2220]: E0416 17:13:38.996896    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:13:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:13:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:13:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:13:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:14:38 ha-022600 kubelet[2220]: E0416 17:14:38.994207    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:14:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:14:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:14:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:14:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:15:38 ha-022600 kubelet[2220]: E0416 17:15:38.994251    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:15:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:15:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:15:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:15:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:16:38 ha-022600 kubelet[2220]: E0416 17:16:38.994203    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:16:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:16:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:16:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:16:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:17:38 ha-022600 kubelet[2220]: E0416 17:17:38.995310    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:17:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:17:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:17:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:17:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 17:17:39.811080    8096 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
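The stderr warning above is incidental to the failure: the Docker CLI keys its context metadata directories by the SHA-256 digest of the context name, and no metadata file exists for the `default` context on this host. A quick sketch (Python used for illustration only) shows that the digest in the missing `meta.json` path is simply the hash of the string `default`:

```python
import hashlib

# The directory name in the missing path above,
# 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f,
# is the SHA-256 digest of the context name "default".
digest = hashlib.sha256(b"default").hexdigest()
print(digest)
# 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```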
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600: (10.9511478s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-022600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7fdf7869d9-gph6r busybox-7fdf7869d9-mnl84
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-022600 describe pod busybox-7fdf7869d9-gph6r busybox-7fdf7869d9-mnl84
helpers_test.go:282: (dbg) kubectl --context ha-022600 describe pod busybox-7fdf7869d9-gph6r busybox-7fdf7869d9-mnl84:

                                                
                                                
-- stdout --
	Name:             busybox-7fdf7869d9-gph6r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h29q5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-h29q5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  108s (x4 over 17m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-7fdf7869d9-mnl84
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xhwqb (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-xhwqb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  108s (x4 over 17m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (239.53s)
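The two Pending busybox replicas are expected fallout from the failed worker add: the deployment spreads replicas via pod anti-affinity on `app=busybox`, so with only one schedulable node the remaining replicas have nowhere to land, matching the `0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules` events above. A minimal sketch of that feasibility check (illustrative Python, not minikube or kube-scheduler code; node and pod names are taken from the output above):

```python
def schedulable_nodes(nodes, pods_by_node, label_key, label_value):
    """A node is feasible only if no pod already running there
    carries the anti-affinity label (required anti-affinity)."""
    return [
        n for n in nodes
        if not any(p["labels"].get(label_key) == label_value
                   for p in pods_by_node.get(n, []))
    ]

# Only one Ready node exists until AddWorkerNode succeeds, and it
# already hosts the scheduled replica busybox-7fdf7869d9-rpfpf.
nodes = ["ha-022600"]
pods_by_node = {
    "ha-022600": [{"name": "busybox-7fdf7869d9-rpfpf",
                   "labels": {"app": "busybox"}}],
}

feasible = schedulable_nodes(nodes, pods_by_node, "app", "busybox")
print(feasible)  # [] -> FailedScheduling; the replica stays Pending
```

Adding a second node without an `app=busybox` pod would make it feasible again, which is why these pods would have scheduled had the worker join succeeded.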

TestMultiControlPlane/serial/HAppyAfterClusterStart (47.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (17.2183602s)
ha_test.go:304: expected profile "ha-022600" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-022600\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-022600\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-022600\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"172.19.95.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.19.81.207\",\"Port\":8443,\"KubernetesVersion\
":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"172.19.80.125\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"172.19.93.94\",\"Port\":0,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":f
alse,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube5:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"Disab
leMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
ha_test.go:307: expected profile "ha-022600" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-022600\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-022600\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1
,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-022600\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"172.19.95.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.19.81.207\",\"Port\":8443,\"Kuber
netesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"172.19.80.125\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"172.19.93.94\",\"Port\":0,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":fa
lse,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube5:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":
false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
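Both assertion failures come from the same decoded payload: ha_test.go parses `profile list --output json` and requires the profile to report four nodes and a "HAppy" status, while the captured JSON shows three nodes and "Stopped". A sketch of that check against an abbreviated copy of the payload above (illustrative Python, not the test's Go code):

```python
import json

# Abbreviated from the `profile list --output json` output captured above.
payload = json.loads("""
{"invalid": [], "valid": [{"Name": "ha-022600", "Status": "Stopped",
 "Config": {"Nodes": [{"Name": ""}, {"Name": "m02"}, {"Name": "m03"}]}}]}
""")

profile = payload["valid"][0]
node_count = len(profile["Config"]["Nodes"])
print(node_count, profile["Status"])  # 3 Stopped
# The test expects 4 nodes and status "HAppy", so both checks at
# ha_test.go:304 and ha_test.go:307 fail.
```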
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600: (10.8433605s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-022600 logs -n 25: (7.3383681s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:13 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:13 UTC |
	|         | busybox-7fdf7869d9-rpfpf             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-rpfpf -- sh       |           |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.80.1             |           |                   |                |                     |                     |
	| node    | add -p ha-022600 -v=7                | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:16 UTC |
	|         | --alsologtostderr                    |           |                   |                |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:53:50
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:53:50.116950   12816 out.go:291] Setting OutFile to fd 784 ...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.117952   12816 out.go:304] Setting ErrFile to fd 696...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.138920   12816 out.go:298] Setting JSON to false
	I0416 16:53:50.141501   12816 start.go:129] hostinfo: {"hostname":"minikube5","uptime":24059,"bootTime":1713262370,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 16:53:50.141501   12816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 16:53:50.143700   12816 out.go:177] * [ha-022600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 16:53:50.144387   12816 notify.go:220] Checking for updates...
	I0416 16:53:50.144982   12816 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:53:50.145881   12816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:53:50.146373   12816 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 16:53:50.146987   12816 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:53:50.147788   12816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:53:50.149250   12816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:53:54.959514   12816 out.go:177] * Using the hyperv driver based on user configuration
	I0416 16:53:54.959811   12816 start.go:297] selected driver: hyperv
	I0416 16:53:54.959811   12816 start.go:901] validating driver "hyperv" against <nil>
	I0416 16:53:54.959811   12816 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:53:55.002641   12816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:53:55.003374   12816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:53:55.003816   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:53:55.003816   12816 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0416 16:53:55.003816   12816 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0416 16:53:55.003816   12816 start.go:340] cluster config:
	{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:53:55.003816   12816 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:53:55.005700   12816 out.go:177] * Starting "ha-022600" primary control-plane node in "ha-022600" cluster
	I0416 16:53:55.006053   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:53:55.006397   12816 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 16:53:55.006397   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:53:55.006539   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:53:55.006809   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:53:55.007075   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:53:55.007821   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json: {Name:mkc2f9747189bfa0db5ea21e93e1afafc0e89eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:53:55.008149   12816 start.go:360] acquireMachinesLock for ha-022600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:53:55.009151   12816 start.go:364] duration metric: took 1.0024ms to acquireMachinesLock for "ha-022600"
	I0416 16:53:55.009151   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:53:55.009151   12816 start.go:125] createHost starting for "" (driver="hyperv")
	I0416 16:53:55.010175   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:53:55.010397   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:53:55.010397   12816 client.go:168] LocalClient.Create starting
	I0416 16:53:55.010740   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011200   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011541   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:56.853713   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:58.347399   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:59.667644   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:02.791736   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:54:03.131710   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: Creating VM...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:05.824937   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:54:05.825022   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:54:07.398351   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:54:07.398635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:07.398635   12816 main.go:141] libmachine: Creating VHD
	I0416 16:54:07.398733   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:54:10.982944   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E9EB5342-E929-43B6-8B97-D7BDD354CEE1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:54:10.983213   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:54:10.992883   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -SizeBytes 20000MB
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-022600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600 -DynamicMemoryEnabled $false
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:21.397696   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600 -Count 2
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:23.302296   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\boot2docker.iso'
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:25.541060   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd'
	I0416 16:54:27.919093   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: Starting VM...
	I0416 16:54:27.919462   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600
	I0416 16:54:30.480037   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:32.483346   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:34.785082   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:34.785271   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:35.799483   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:37.788898   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:40.058231   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:40.058742   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:41.064074   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:45.301253   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:45.301420   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:46.309647   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:50.614494   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:50.615195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:51.620909   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:53.639317   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:53.640351   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:53.640405   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:55.942630   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:54:55.943393   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:55.943471   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stderr =====>] : 
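
The polling above — alternating `( Hyper-V\Get-VM ha-022600 ).state` and adapter `ipaddresses[0]` queries, sleeping briefly after each empty result until `172.19.81.207` appears — is a plain poll-until-ready loop. A minimal, hypothetical sketch of that pattern (the function and parameter names are illustrative, not minikube's actual API; the real driver shells out to PowerShell for each probe):

```python
import time

def wait_for_ip(query_ip, query_state, timeout=120.0, interval=1.0):
    """Poll a VM until its first network adapter reports an address.

    query_state and query_ip are injected callables so the loop can be
    exercised without Hyper-V; in the log above each probe is a
    powershell.exe invocation that often returns empty stdout at first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if query_state() == "Running":
            ip = query_ip().strip()
            if ip:  # empty stdout means no address assigned yet
                return ip
        time.sleep(interval)
    raise TimeoutError("VM did not report an IP address in time")

# Simulate an adapter that only gets an address on the fifth probe,
# mirroring the run of empty [stdout =====>] lines in the log.
answers = iter(["", "", "", "", "172.19.81.207"])
ip = wait_for_ip(lambda: next(answers), lambda: "Running", interval=0.01)
print(ip)  # -> 172.19.81.207
```

Injecting the probe functions keeps the retry policy (timeout, interval) separable from the PowerShell plumbing, which is why the interleaved state/IP queries in the log repeat with such a regular cadence.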
	I0416 16:54:57.837395   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:54:57.837474   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:59.762683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:59.763360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:59.763440   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:02.010689   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:02.023158   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:02.023158   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:55:02.152140   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:55:02.152244   12816 buildroot.go:166] provisioning hostname "ha-022600"
	I0416 16:55:02.152322   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:03.957618   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:06.309822   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:06.310484   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:06.310484   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600 && echo "ha-022600" | sudo tee /etc/hostname
	I0416 16:55:06.479074   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600
	
	I0416 16:55:06.479182   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:08.433073   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:10.796713   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:10.797321   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:10.797321   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:55:10.944702   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:55:10.944870   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:55:10.944983   12816 buildroot.go:174] setting up certificates
	I0416 16:55:10.944983   12816 provision.go:84] configureAuth start
	I0416 16:55:10.945092   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:12.933614   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:17.088334   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:19.325791   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:19.326294   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:19.326294   12816 provision.go:143] copyHostCerts
	I0416 16:55:19.326294   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:55:19.326294   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:55:19.326294   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:55:19.326900   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:55:19.328097   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:55:19.328097   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:55:19.329417   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:55:19.329417   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:55:19.329417   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:55:19.330063   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:55:19.330726   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600 san=[127.0.0.1 172.19.81.207 ha-022600 localhost minikube]
	I0416 16:55:19.539117   12816 provision.go:177] copyRemoteCerts
	I0416 16:55:19.547114   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:55:19.547114   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:23.727019   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:23.834423   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.287066s)
	I0416 16:55:23.834577   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:55:23.835008   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:55:23.874966   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:55:23.875470   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0416 16:55:23.923921   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:55:23.923921   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:55:23.965042   12816 provision.go:87] duration metric: took 13.0192422s to configureAuth
	I0416 16:55:23.965042   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:55:23.965741   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:55:23.965827   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:25.905339   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:25.905903   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:25.905986   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:28.170079   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:28.170419   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:28.173356   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:28.173937   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:28.173937   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:55:28.301727   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:55:28.301727   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:55:28.302425   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:55:28.302506   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:30.181889   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:32.398667   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:32.399299   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:32.399475   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:55:32.556658   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:55:32.556887   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:34.446928   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:34.446969   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:34.447053   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:36.709442   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:36.710242   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:36.714111   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:36.714437   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:36.714437   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:55:38.655929   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 16:55:38.655929   12816 machine.go:97] duration metric: took 40.8162201s to provisionDockerMachine
	I0416 16:55:38.656036   12816 client.go:171] duration metric: took 1m43.6397622s to LocalClient.Create
	I0416 16:55:38.656036   12816 start.go:167] duration metric: took 1m43.6397622s to libmachine.API.Create "ha-022600"
	I0416 16:55:38.656036   12816 start.go:293] postStartSetup for "ha-022600" (driver="hyperv")
	I0416 16:55:38.656036   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:55:38.665072   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:55:38.665072   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:40.515910   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:42.764754   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:42.765404   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:42.765404   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:42.879399   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2140881s)
	I0416 16:55:42.892410   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:55:42.899117   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:55:42.899117   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:55:42.899734   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:55:42.901086   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:55:42.901154   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:55:42.911237   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:55:42.927664   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:55:42.975440   12816 start.go:296] duration metric: took 4.3191592s for postStartSetup
	I0416 16:55:42.977201   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:44.831562   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:47.134349   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:47.134788   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:47.135000   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:55:47.137270   12816 start.go:128] duration metric: took 1m52.1217609s to createHost
	I0416 16:55:47.137270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:49.024657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:51.238446   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:51.238526   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:51.242455   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:51.243052   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:51.243052   12816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:55:51.369469   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286551.540248133
	
	I0416 16:55:51.369575   12816 fix.go:216] guest clock: 1713286551.540248133
	I0416 16:55:51.369575   12816 fix.go:229] Guest: 2024-04-16 16:55:51.540248133 +0000 UTC Remote: 2024-04-16 16:55:47.1372703 +0000 UTC m=+117.146546101 (delta=4.402977833s)
	I0416 16:55:51.369790   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:53.407581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:55.667543   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:55.667688   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:55.667688   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286551
	I0416 16:55:55.810591   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:55:51 UTC 2024
	
	I0416 16:55:55.810700   12816 fix.go:236] clock set: Tue Apr 16 16:55:51 UTC 2024
	 (err=<nil>)
	I0416 16:55:55.810700   12816 start.go:83] releasing machines lock for "ha-022600", held for 2m0.7946995s
	I0416 16:55:55.810965   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:57.711672   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:59.985139   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:59.985210   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:59.988730   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:55:59.988803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:59.998550   12816 ssh_runner.go:195] Run: cat /version.json
	I0416 16:55:59.998550   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:01.995788   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.995959   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.996084   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:04.379274   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.379356   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.379701   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.391360   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.392161   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.392520   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.469159   12816 ssh_runner.go:235] Completed: cat /version.json: (4.4703555s)
	I0416 16:56:04.479363   12816 ssh_runner.go:195] Run: systemctl --version
	I0416 16:56:04.584079   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5950892s)
	I0416 16:56:04.593130   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:56:04.602217   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:56:04.610705   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:56:04.639084   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:56:04.639119   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:04.639119   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:04.684127   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:56:04.713899   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:56:04.734297   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:56:04.745020   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:56:04.776657   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.806087   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:56:04.854166   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.890388   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:56:04.918140   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:56:04.946595   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:56:04.975408   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:56:05.001633   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:56:05.028505   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:56:05.053299   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:05.230466   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:56:05.260161   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:05.269988   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:56:05.302694   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.335619   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:56:05.368663   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.402792   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.435612   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:56:05.483431   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.505797   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:05.548843   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:56:05.563980   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:56:05.582552   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:56:05.624048   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:56:05.804495   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:56:05.984936   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:56:05.985183   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:56:06.032244   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:06.217075   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:08.662995   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4457805s)
	I0416 16:56:08.670977   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 16:56:08.701542   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:08.730698   12816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 16:56:08.941813   12816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 16:56:09.145939   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.331138   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 16:56:09.370232   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:09.409657   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.615575   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 16:56:09.726879   12816 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 16:56:09.737760   12816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 16:56:09.746450   12816 start.go:562] Will wait 60s for crictl version
	I0416 16:56:09.755840   12816 ssh_runner.go:195] Run: which crictl
	I0416 16:56:09.771470   12816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:56:09.827603   12816 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 16:56:09.836477   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.874967   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.907967   12816 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 16:56:09.908249   12816 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: 172.19.80.1/20
	I0416 16:56:09.924842   12816 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 16:56:09.931830   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:56:09.968931   12816 kubeadm.go:877] updating cluster {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:56:09.968931   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:56:09.975955   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:09.996899   12816 docker.go:685] Got preloaded images: 
	I0416 16:56:09.996899   12816 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0416 16:56:10.008276   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:10.035609   12816 ssh_runner.go:195] Run: which lz4
	I0416 16:56:10.042582   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0416 16:56:10.050849   12816 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 16:56:10.058074   12816 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 16:56:10.058074   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0416 16:56:11.721910   12816 docker.go:649] duration metric: took 1.6789563s to copy over tarball
	I0416 16:56:11.731181   12816 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 16:56:20.333529   12816 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.60186s)
	I0416 16:56:20.333529   12816 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 16:56:20.400516   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:20.419486   12816 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0416 16:56:20.469018   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:20.655543   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:23.229259   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5734984s)
	I0416 16:56:23.240705   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:23.262332   12816 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0416 16:56:23.262383   12816 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:56:23.262383   12816 kubeadm.go:928] updating node { 172.19.81.207 8443 v1.29.3 docker true true} ...
	I0416 16:56:23.262383   12816 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-022600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.81.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:56:23.270008   12816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 16:56:23.307277   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:23.307277   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:23.307362   12816 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:56:23.307406   12816 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.81.207 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-022600 NodeName:ha-022600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.81.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.81.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:56:23.307691   12816 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.81.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-022600"
	  kubeletExtraArgs:
	    node-ip: 172.19.81.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.81.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 16:56:23.307749   12816 kube-vip.go:111] generating kube-vip config ...
	I0416 16:56:23.318492   12816 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:56:23.343950   12816 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:56:23.344258   12816 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0416 16:56:23.353585   12816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:56:23.370542   12816 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:56:23.379813   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0416 16:56:23.397865   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0416 16:56:23.432291   12816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:56:23.462868   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0416 16:56:23.492579   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0416 16:56:23.534977   12816 ssh_runner.go:195] Run: grep 172.19.95.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:56:23.542734   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:56:23.575719   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:23.754395   12816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:56:23.781462   12816 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600 for IP: 172.19.81.207
	I0416 16:56:23.781462   12816 certs.go:194] generating shared ca certs ...
	I0416 16:56:23.781462   12816 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 16:56:23.783651   12816 certs.go:256] generating profile certs ...
	I0416 16:56:23.784402   12816 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key
	I0416 16:56:23.784569   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt with IP's: []
	I0416 16:56:23.984047   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt ...
	I0416 16:56:23.984047   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt: {Name:mk3ebdcb7f076a09a399313f7ed3edf14403a6a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.985977   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key ...
	I0416 16:56:23.985977   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key: {Name:mk94343a485b04f4b25a0ccd3245e197e7ecbec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.986215   12816 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648
	I0416 16:56:23.987265   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.81.207 172.19.95.254]
	I0416 16:56:24.317716   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 ...
	I0416 16:56:24.317716   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648: {Name:mk30f7000427979a1bcf8d6fc3995d1f7ccc170c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.319797   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 ...
	I0416 16:56:24.319797   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648: {Name:mk95e9e3e0f96031ef005f6c36470c216303a111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.320163   12816 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt
	I0416 16:56:24.331288   12816 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key
	I0416 16:56:24.332214   12816 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key
	I0416 16:56:24.332214   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt with IP's: []
	I0416 16:56:24.406574   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt ...
	I0416 16:56:24.406574   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt: {Name:mk73158a02cd8861e90a3b76d50704b360c358ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.407013   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key ...
	I0416 16:56:24.407013   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key: {Name:mk6842e2af8fadaf278ec7592edd5bd96f07c8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.408078   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:56:24.408945   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:56:24.409732   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:56:24.417870   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:56:24.418145   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 16:56:24.418533   12816 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 16:56:24.418533   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 16:56:24.418811   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 16:56:24.418990   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 16:56:24.419161   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 16:56:24.419368   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 16:56:24.419647   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 16:56:24.419767   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:24.419867   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 16:56:24.420003   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:56:24.466985   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:56:24.509816   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:56:24.554817   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:56:24.603006   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 16:56:24.646596   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 16:56:24.694120   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:56:24.741669   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 16:56:24.785888   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 16:56:24.829403   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:56:24.891821   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 16:56:24.933883   12816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 16:56:24.975091   12816 ssh_runner.go:195] Run: openssl version
	I0416 16:56:24.994129   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 16:56:25.021821   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.028512   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.037989   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.054924   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:56:25.080011   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:56:25.106815   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.113980   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.126339   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.144599   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:56:25.170309   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 16:56:25.199080   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.206080   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.214031   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.237026   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
	I0416 16:56:25.266837   12816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:56:25.273408   12816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:56:25.273858   12816 kubeadm.go:391] StartCluster: {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:56:25.281991   12816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 16:56:25.314891   12816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 16:56:25.342248   12816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 16:56:25.368032   12816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 16:56:25.385737   12816 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 16:56:25.385737   12816 kubeadm.go:156] found existing configuration files:
	
	I0416 16:56:25.393851   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 16:56:25.410393   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 16:56:25.421874   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 16:56:25.453762   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 16:56:25.468769   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 16:56:25.477353   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 16:56:25.501898   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.515888   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 16:56:25.524885   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.548518   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 16:56:25.563660   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 16:56:25.572269   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 16:56:25.587981   12816 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 16:56:25.791977   12816 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 16:56:25.791977   12816 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 16:56:25.958638   12816 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 16:56:25.959035   12816 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 16:56:25.959403   12816 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 16:56:26.228464   12816 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 16:56:26.229544   12816 out.go:204]   - Generating certificates and keys ...
	I0416 16:56:26.229862   12816 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 16:56:26.230882   12816 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 16:56:26.359024   12816 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 16:56:26.583044   12816 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 16:56:26.715543   12816 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 16:56:27.014892   12816 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 16:56:27.414264   12816 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 16:56:27.414467   12816 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.642396   12816 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 16:56:27.642770   12816 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.844566   12816 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 16:56:28.089475   12816 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 16:56:28.543900   12816 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 16:56:28.548586   12816 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 16:56:29.051829   12816 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 16:56:29.485679   12816 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 16:56:29.830737   12816 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 16:56:30.055972   12816 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 16:56:30.310446   12816 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 16:56:30.311113   12816 out.go:204]   - Booting up control plane ...
	I0416 16:56:30.311289   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 16:56:30.311970   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 16:56:30.317049   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 16:56:30.342443   12816 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 16:56:30.526725   12816 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 16:56:37.142045   12816 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.615653 seconds
	I0416 16:56:37.159025   12816 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 16:56:37.175108   12816 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 16:56:37.707867   12816 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 16:56:37.708715   12816 kubeadm.go:309] [mark-control-plane] Marking the node ha-022600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 16:56:38.222729   12816 kubeadm.go:309] [bootstrap-token] Using token: a3r5qn.ikva200bfcppykg5
	I0416 16:56:38.223819   12816 out.go:204]   - Configuring RBAC rules ...
	I0416 16:56:38.224231   12816 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 16:56:38.232416   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 16:56:38.244982   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 16:56:38.249926   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 16:56:38.257723   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 16:56:38.262029   12816 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 16:56:38.279883   12816 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 16:56:38.592701   12816 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 16:56:38.638273   12816 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 16:56:38.639572   12816 kubeadm.go:309] 
	I0416 16:56:38.640154   12816 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 16:56:38.640230   12816 kubeadm.go:309] 
	I0416 16:56:38.640982   12816 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 16:56:38.641038   12816 kubeadm.go:309] 
	I0416 16:56:38.641299   12816 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 16:56:38.641581   12816 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309] 
	I0416 16:56:38.641989   12816 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 16:56:38.642031   12816 kubeadm.go:309] 
	I0416 16:56:38.642184   12816 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 16:56:38.642228   12816 kubeadm.go:309] 
	I0416 16:56:38.642350   12816 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 16:56:38.642660   12816 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 16:56:38.642862   12816 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 16:56:38.642900   12816 kubeadm.go:309] 
	I0416 16:56:38.643166   12816 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 16:56:38.643426   12816 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 16:56:38.643426   12816 kubeadm.go:309] 
	I0416 16:56:38.643613   12816 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.643867   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 16:56:38.643909   12816 kubeadm.go:309] 	--control-plane 
	I0416 16:56:38.643961   12816 kubeadm.go:309] 
	I0416 16:56:38.644233   12816 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 16:56:38.644272   12816 kubeadm.go:309] 
	I0416 16:56:38.644444   12816 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.644734   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 16:56:38.647455   12816 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 16:56:38.647488   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:38.647539   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:38.648246   12816 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 16:56:38.657141   12816 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 16:56:38.671263   12816 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 16:56:38.671263   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 16:56:38.722410   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 16:56:39.265655   12816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-022600 minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=ha-022600 minikube.k8s.io/primary=true
	I0416 16:56:39.290244   12816 ops.go:34] apiserver oom_adj: -16
	I0416 16:56:39.441163   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.950155   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.453751   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.955147   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.455931   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.953044   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.454696   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.949299   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.454962   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.953447   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.456402   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.956686   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.449476   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.951602   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.451988   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.949212   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.449356   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.950703   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.458777   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.956811   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.456669   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.943595   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.443906   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.950503   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.454863   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.944285   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:52.083562   12816 kubeadm.go:1107] duration metric: took 12.8170858s to wait for elevateKubeSystemPrivileges
	W0416 16:56:52.083816   12816 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 16:56:52.083816   12816 kubeadm.go:393] duration metric: took 26.808438s to StartCluster
	I0416 16:56:52.083816   12816 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.084214   12816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:52.086643   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.088384   12816 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:56:52.088384   12816 start.go:240] waiting for startup goroutines ...
	I0416 16:56:52.088384   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 16:56:52.088384   12816 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 16:56:52.088630   12816 addons.go:69] Setting storage-provisioner=true in profile "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:234] Setting addon storage-provisioner=true in "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:69] Setting default-storageclass=true in profile "ha-022600"
	I0416 16:56:52.088850   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:52.088964   12816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-022600"
	I0416 16:56:52.088964   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:56:52.090289   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.090671   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.207597   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 16:56:52.469504   12816 start.go:946] {"host.minikube.internal": 172.19.80.1} host record injected into CoreDNS's ConfigMap
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.165583   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.165635   12816 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:54.165635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.166734   12816 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:56:54.166340   12816 kapi.go:59] client config for ha-022600: &rest.Config{Host:"https://172.19.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 16:56:54.167133   12816 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:56:54.167133   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 16:56:54.167133   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:54.167791   12816 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 16:56:54.168180   12816 addons.go:234] Setting addon default-storageclass=true in "ha-022600"
	I0416 16:56:54.168347   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:54.169251   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:56.312581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.312988   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313046   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313270   12816 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 16:56:56.313270   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.330966   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:58.735727   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:58.735876   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.736103   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:58.898469   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:00.676245   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:57:00.828151   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 16:57:01.248041   12816 round_trippers.go:463] GET https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0416 16:57:01.248041   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.248041   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.248041   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.261890   12816 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0416 16:57:01.262478   12816 round_trippers.go:463] PUT https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0416 16:57:01.262478   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Content-Type: application/json
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.262478   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.268964   12816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:57:01.269995   12816 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 16:57:01.270495   12816 addons.go:505] duration metric: took 9.181591s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 16:57:01.270576   12816 start.go:245] waiting for cluster config update ...
	I0416 16:57:01.270618   12816 start.go:254] writing updated cluster config ...
	I0416 16:57:01.271859   12816 out.go:177] 
	I0416 16:57:01.284169   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:57:01.284169   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.285951   12816 out.go:177] * Starting "ha-022600-m02" control-plane node in "ha-022600" cluster
	I0416 16:57:01.286952   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:57:01.286952   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:57:01.286952   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:57:01.286952   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:57:01.286952   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.296247   12816 start.go:360] acquireMachinesLock for ha-022600-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:57:01.297324   12816 start.go:364] duration metric: took 1.0773ms to acquireMachinesLock for "ha-022600-m02"
	I0416 16:57:01.297559   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:57:01.297559   12816 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0416 16:57:01.297559   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:57:01.297559   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:57:01.297559   12816 client.go:168] LocalClient.Create starting
	I0416 16:57:01.298838   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299293   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:57:03.017072   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:57:03.017279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:03.017366   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:09.316740   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:57:09.669552   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: Creating VM...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:12.690107   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:57:12.690185   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:14.267157   12816 main.go:141] libmachine: Creating VHD
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : FE960248-03C1-43D6-B7AE-A60D4C86308B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:57:17.758158   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:57:20.709379   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:20.709950   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:20.710019   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -SizeBytes 20000MB
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-022600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600-m02 -DynamicMemoryEnabled $false
	I0416 16:57:28.159153   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:28.159229   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:28.159409   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600-m02 -Count 2
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\boot2docker.iso'
	I0416 16:57:32.420739   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:32.421735   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:32.421878   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd'
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: Starting VM...
	I0416 16:57:34.780971   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600-m02
	I0416 16:57:37.369505   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:57:37.369767   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:39.415286   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:42.700464   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:48.000886   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:49.992438   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:49.992894   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:49.992930   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:53.290891   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:55.287716   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:55.287962   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:55.288037   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:58.572803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:02.905391   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stderr =====>] : 
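The "Waiting for host to start..." exchange above is a poll loop: query the VM state, then the first NIC's first IP address, and sleep roughly a second between rounds until DHCP has assigned an address. A minimal sketch of that retry pattern in Python — the `get_state`/`get_ip` callables are stand-ins for the PowerShell `(Get-VM <name>).state` and `((Get-VM <name>).networkadapters[0]).ipaddresses[0]` queries seen in the log, and the attempt count is an assumption:

```python
import time

def wait_for_host(get_state, get_ip, attempts=60, delay=1.0):
    """Poll until the VM reports Running and its first NIC has an IP.

    Mirrors the logged sequence: an empty stdout from the IP query
    (as on the first few rounds above) means no lease yet, so retry.
    """
    for _ in range(attempts):
        if get_state() == "Running":
            ip = get_ip()
            if ip:  # empty string -> DHCP has not assigned one yet
                return ip
        time.sleep(delay)
    raise TimeoutError("host did not become reachable")

# Simulated driver: the address shows up on the third round, as in the log
answers = iter(["", "", "172.19.80.125"])
ip = wait_for_host(lambda: "Running", lambda: next(answers), delay=0)
print(ip)  # 172.19.80.125
```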
	I0416 16:58:04.899479   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:58:04.899479   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:06.914869   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:09.273511   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:09.273546   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:09.277783   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:09.278406   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:09.278406   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:58:09.413281   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:58:09.413281   12816 buildroot.go:166] provisioning hostname "ha-022600-m02"
	I0416 16:58:09.413281   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:11.439079   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:13.805295   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:13.805684   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:13.805684   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600-m02 && echo "ha-022600-m02" | sudo tee /etc/hostname
	I0416 16:58:13.957933   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600-m02
	
	I0416 16:58:13.958021   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:18.176996   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:18.178002   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:18.182057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:18.182681   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:18.182681   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:58:18.315751   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
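The shell snippet the provisioner just ran keeps `/etc/hosts` consistent with the new hostname: if no line already ends with the hostname, it either rewrites an existing `127.0.1.1` entry in place or appends a fresh one. A hedged Python rendering of the same decision logic (function name and in-memory approach are illustrative; the real script edits the file via `sed`/`tee`):

```python
import re

def ensure_hosts_entry(hosts_text: str, hostname: str) -> str:
    """Mirror the logged shell logic: no-op if any line already ends
    with the hostname; otherwise rewrite an existing 127.0.1.1 line,
    or append one if none exists."""
    if re.search(rf"^.*\s{re.escape(hostname)}$", hosts_text, re.M):
        return hosts_text
    if re.search(r"^127\.0\.1\.1\s", hosts_text, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {hostname}",
                      hosts_text, flags=re.M)
    return hosts_text.rstrip("\n") + f"\n127.0.1.1 {hostname}\n"

print(ensure_hosts_entry("127.0.0.1 localhost\n", "ha-022600-m02"))
# 127.0.0.1 localhost
# 127.0.1.1 ha-022600-m02
```

Like the shell original, this is idempotent: running it again on its own output changes nothing, which is why the provisioner can safely re-run it on every start.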
	I0416 16:58:18.315853   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:58:18.315853   12816 buildroot.go:174] setting up certificates
	I0416 16:58:18.315853   12816 provision.go:84] configureAuth start
	I0416 16:58:18.315853   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:20.243862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:22.525833   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:22.525945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:22.526057   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:24.418894   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:26.735560   12816 provision.go:143] copyHostCerts
	I0416 16:58:26.736546   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:58:26.736627   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:58:26.737290   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:58:26.737900   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:58:26.737900   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:58:26.738191   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:58:26.738908   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:58:26.738977   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:58:26.739652   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600-m02 san=[127.0.0.1 172.19.80.125 ha-022600-m02 localhost minikube]
	I0416 16:58:26.917277   12816 provision.go:177] copyRemoteCerts
	I0416 16:58:26.926308   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:58:26.926600   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:28.830343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:31.113681   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:31.229222   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3026703s)
	I0416 16:58:31.229222   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:58:31.229700   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:58:31.279666   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:58:31.280307   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:58:31.328101   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:58:31.328245   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:58:31.382563   12816 provision.go:87] duration metric: took 13.065969s to configureAuth
	I0416 16:58:31.382563   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:58:31.383343   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:58:31.383343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:33.331275   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:35.653673   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:35.653721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:35.656855   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:35.657430   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:35.657430   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:58:35.781565   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:58:35.781565   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:58:35.781565   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:58:35.782090   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:37.696344   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:39.961057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:39.961515   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:39.961564   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.81.207"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:58:40.123664   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.81.207
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:58:40.123818   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:42.064878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:42.064974   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:42.065152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:44.330103   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:44.330731   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:44.330731   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:58:46.283136   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 16:58:46.283253   12816 machine.go:97] duration metric: took 41.3814279s to provisionDockerMachine
	I0416 16:58:46.283253   12816 client.go:171] duration metric: took 1m44.9797412s to LocalClient.Create
	I0416 16:58:46.283253   12816 start.go:167] duration metric: took 1m44.9797412s to libmachine.API.Create "ha-022600"
	I0416 16:58:46.283253   12816 start.go:293] postStartSetup for "ha-022600-m02" (driver="hyperv")
	I0416 16:58:46.283345   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:58:46.292724   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:58:46.292724   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:50.480821   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:50.575284   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2823171s)
	I0416 16:58:50.584260   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:58:50.591292   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:58:50.591900   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:58:50.591900   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:58:50.601073   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:58:50.618807   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:58:50.671301   12816 start.go:296] duration metric: took 4.3877068s for postStartSetup
	I0416 16:58:50.673161   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:52.621684   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:54.923763   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:58:54.926483   12816 start.go:128] duration metric: took 1m53.622481s to createHost
	I0416 16:58:54.926657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:56.793184   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:59.024255   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:59.025184   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:59.029108   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:59.029633   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:59.029730   12816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:58:59.149333   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286739.315259098
	
	I0416 16:58:59.149333   12816 fix.go:216] guest clock: 1713286739.315259098
	I0416 16:58:59.149333   12816 fix.go:229] Guest: 2024-04-16 16:58:59.315259098 +0000 UTC Remote: 2024-04-16 16:58:54.9265716 +0000 UTC m=+304.925199701 (delta=4.388687498s)
	I0416 16:58:59.149333   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:01.054656   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:03.307071   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:59:03.307459   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:59:03.307531   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286739
	I0416 16:59:03.449024   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:58:59 UTC 2024
	
	I0416 16:59:03.449024   12816 fix.go:236] clock set: Tue Apr 16 16:58:59 UTC 2024
	 (err=<nil>)
	I0416 16:59:03.449024   12816 start.go:83] releasing machines lock for "ha-022600-m02", held for 2m2.1447745s
	I0416 16:59:03.450039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:07.739042   12816 out.go:177] * Found network options:
	I0416 16:59:07.739784   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.740404   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.741027   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.741505   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 16:59:07.742708   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.744988   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:59:07.745153   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:07.752817   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 16:59:07.752817   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:09.758953   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:12.157582   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.158536   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.159044   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.185179   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.257436   12816 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5043642s)
	W0416 16:59:12.257436   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:59:12.266545   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:59:12.367206   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:59:12.367296   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:12.367330   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6219642s)
	I0416 16:59:12.367330   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:12.423201   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:59:12.453988   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:59:12.472992   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:59:12.482991   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:59:12.510864   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.538866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:59:12.565866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.597751   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:59:12.622761   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:59:12.648905   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:59:12.674904   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:59:12.713452   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:59:12.741495   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:59:12.768497   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:12.975524   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:59:13.011635   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:13.023647   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:59:13.058146   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.091991   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:59:13.139058   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.173081   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.208242   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:59:13.259511   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.282094   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:13.329081   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:59:13.344832   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:59:13.362131   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:59:13.403377   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:59:13.597444   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:59:13.768147   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:59:13.768278   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:59:13.808294   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:13.987216   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:00:15.104612   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1138396s)
	I0416 17:00:15.115049   12816 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 17:00:15.145752   12816 out.go:177] 
	W0416 17:00:15.146473   12816 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 16:58:45 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.076842920Z" level=info msg="Starting up"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.077687177Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.078706068Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.109665355Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138411128Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138448735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138508447Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138523049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138600164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138632670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138848110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138955930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139030244Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139045347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139142365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139433520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142495192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142588309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142778845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142795748Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143044695Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143174419Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143191422Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.152862930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153144583Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153313214Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153337519Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153354522Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153467543Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153957434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154159572Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154195179Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154212082Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154230586Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154258491Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154272393Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154287696Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154303599Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154317302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154330504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154344107Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154373612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154392516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154406618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154421121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154434024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154447526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154460128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154474031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154498536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154514539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154525841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154555046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154568249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154583952Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154604755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154629960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154642062Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154700973Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154916114Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155014532Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155030135Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155203567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155302486Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155325090Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155706861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155796078Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155907599Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155947306Z" level=info msg="containerd successfully booted in 0.047582s"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.119001526Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.129323458Z" level=info msg="Loading containers: start."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.358382320Z" level=info msg="Loading containers: done."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377033580Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377149301Z" level=info msg="Daemon has completed initialization"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.447556885Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 16:58:46 ha-022600-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.449134569Z" level=info msg="API listen on [::]:2376"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.178053148Z" level=info msg="Processing signal 'terminated'"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.179830517Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 16:59:14 ha-022600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.180814055Z" level=info msg="Daemon shutdown complete"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181020363Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181054564Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:59:15 ha-022600-m02 dockerd[1019]: time="2024-04-16T16:59:15.248212596Z" level=info msg="Starting up"
	Apr 16 17:00:15 ha-022600-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0416 17:00:15.146611   12816 out.go:239] * 
	W0416 17:00:15.147806   12816 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 17:00:15.148383   12816 out.go:177] 
	
	
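The journalctl excerpt above pins down the failure: after minikube rewrote the daemon config, dockerd's second start at 16:59:15 sits waiting on `/run/containerd/containerd.sock` and gives up with `context deadline exceeded` at 17:00:15 — the same minute reflected in `sudo systemctl restart docker: (1m1.1138396s)`. A minimal sketch (a hypothetical helper, not part of minikube) that extracts that gap from the two quoted journal lines:

```python
from datetime import datetime

# Two journal lines quoted verbatim from the log above: dockerd's second
# "Starting up" and the moment it gives up dialing containerd's socket.
start_line = "Apr 16 16:59:15 ha-022600-m02 dockerd[1019]: ... Starting up"
fail_line = ('Apr 16 17:00:15 ha-022600-m02 dockerd[1019]: failed to start daemon: '
             'failed to dial "/run/containerd/containerd.sock": context deadline exceeded')

def journal_time(line: str) -> datetime:
    """Parse the 'Mon DD HH:MM:SS' prefix of a journalctl line (year assumed 2024)."""
    return datetime.strptime("2024 " + " ".join(line.split()[:3]), "%Y %b %d %H:%M:%S")

gap = journal_time(fail_line) - journal_time(start_line)
print(gap.total_seconds())  # 60.0 — the full minute dockerd waited before failing
```

The exact one-minute gap suggests a fixed dial timeout rather than a crash, which is consistent with containerd never coming back up after the config rewrite and restart at 16:59:12–16:59:13.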
	==> Docker <==
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:05 ha-022600 dockerd[1325]: 2024/04/16 17:13:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:46 ha-022600 dockerd[1325]: 2024/04/16 17:17:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:46 ha-022600 dockerd[1325]: 2024/04/16 17:17:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d38b1a5f4caa8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   8a4de3aa24af1       busybox-7fdf7869d9-rpfpf
	3fe545bfad4e6       cbb01a7bd410d                                                                                         21 minutes ago      Running             coredns                   0                   093278b3840ef       coredns-76f75df574-qm89x
	979dee88be2b4       cbb01a7bd410d                                                                                         21 minutes ago      Running             coredns                   0                   4ad38b0d59335       coredns-76f75df574-ww2r6
	257879ecf06b2       6e38f40d628db                                                                                         21 minutes ago      Running             storage-provisioner       0                   bf991c3e34e2d       storage-provisioner
	be245de9ef545       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              21 minutes ago      Running             kindnet-cni               0                   92c35b3fd0967       kindnet-mwqvl
	05db92f49e0df       a1d263b5dc5b0                                                                                         21 minutes ago      Running             kube-proxy                0                   12380f49c1509       kube-proxy-2vddt
	d1ba82cd26254       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     22 minutes ago      Running             kube-vip                  0                   fa2c75c4c8d59       kube-vip-ha-022600
	a7fb69539df62       6052a25da3f97                                                                                         22 minutes ago      Running             kube-controller-manager   0                   b536621e20d4b       kube-controller-manager-ha-022600
	4fd5df8c9fd37       39f995c9f1996                                                                                         22 minutes ago      Running             kube-apiserver            0                   5a7a1e80caeb4       kube-apiserver-ha-022600
	e042d71e8b0e8       8c390d98f50c0                                                                                         22 minutes ago      Running             kube-scheduler            0                   5a2551c91a1b6       kube-scheduler-ha-022600
	c29b0762ff0bf       3861cfcd7c04c                                                                                         22 minutes ago      Running             etcd                      0                   c8a9aa3126cf5       etcd-ha-022600
	
	
	==> coredns [3fe545bfad4e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55981 - 4765 "HINFO IN 3735046377920793891.8143170502200932773. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058936595s
	[INFO] 10.244.0.4:43350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000388921s
	[INFO] 10.244.0.4:35317 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.052221997s
	[INFO] 10.244.0.4:52074 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.040069369s
	[INFO] 10.244.0.4:49068 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.053312593s
	[INFO] 10.244.0.4:54711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123507s
	[INFO] 10.244.0.4:44694 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037006811s
	[INFO] 10.244.0.4:33399 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124606s
	[INFO] 10.244.0.4:37329 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000241612s
	[INFO] 10.244.0.4:57333 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131407s
	[INFO] 10.244.0.4:38806 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060403s
	[INFO] 10.244.0.4:48948 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000263914s
	[INFO] 10.244.0.4:51825 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177309s
	[INFO] 10.244.0.4:53272 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00018301s
	
	
	==> coredns [979dee88be2b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50127 - 24072 "HINFO IN 7665836187497317301.2285362183679153792. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027543487s
	[INFO] 10.244.0.4:34822 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000224011s
	[INFO] 10.244.0.4:48911 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000349218s
	[INFO] 10.244.0.4:43369 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023699624s
	[INFO] 10.244.0.4:56309 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000258914s
	[INFO] 10.244.0.4:36791 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.003463479s
	[INFO] 10.244.0.4:55996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000301816s
	[INFO] 10.244.0.4:35967 - 5 "PTR IN 1.80.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000116506s
	
	
	==> describe nodes <==
	Name:               ha-022600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-022600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-022600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:56:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-022600
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:18:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:16:34 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:16:34 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:16:34 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:16:34 +0000   Tue, 16 Apr 2024 16:57:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.81.207
	  Hostname:    ha-022600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4674338fa494bbcb2e21e2b4385c5e1
	  System UUID:                201025fc-0c03-cc49-a194-29d6500971a2
	  Boot ID:                    6ae5bedd-6e8e-4f58-b08c-8e9912fd04de
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-rpfpf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-76f75df574-qm89x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-76f75df574-ww2r6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-ha-022600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-mwqvl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-ha-022600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-022600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-2vddt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-022600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-022600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21m   kube-proxy       
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node ha-022600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node ha-022600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node ha-022600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m   node-controller  Node ha-022600 event: Registered Node ha-022600 in Controller
	  Normal  NodeReady                21m   kubelet          Node ha-022600 status is now: NodeReady
	
	
	Name:               ha-022600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-022600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-022600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T17_16_38_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:16:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-022600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:18:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:17:08 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:17:08 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:17:08 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:17:08 +0000   Tue, 16 Apr 2024 17:16:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.93.94
	  Hostname:    ha-022600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 cefa169b716045589e59382d0939ad48
	  System UUID:                25782c5b-4e02-0547-b063-db6b9c5f1f5b
	  Boot ID:                    e7c67d41-aa2d-47a1-952b-fa7ff5422e05
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7c2px       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      117s
	  kube-system                 kube-proxy-ss5lp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s (x2 over 117s)  kubelet          Node ha-022600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x2 over 117s)  kubelet          Node ha-022600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x2 over 117s)  kubelet          Node ha-022600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  117s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           113s                 node-controller  Node ha-022600-m03 event: Registered Node ha-022600-m03 in Controller
	  Normal  NodeReady                100s                 kubelet          Node ha-022600-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.656516] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr16 16:55] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.165290] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[Apr16 16:56] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.091843] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.493988] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.172637] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.230010] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +2.695048] systemd-fstab-generator[1185]: Ignoring "noauto" option for root device
	[  +0.219400] systemd-fstab-generator[1197]: Ignoring "noauto" option for root device
	[  +0.196554] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.267217] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[ +11.053282] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +0.095458] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.012264] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +6.758798] systemd-fstab-generator[1720]: Ignoring "noauto" option for root device
	[  +0.093227] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.850543] systemd-fstab-generator[2213]: Ignoring "noauto" option for root device
	[  +0.130310] kauditd_printk_skb: 72 callbacks suppressed
	[ +15.381320] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.386371] kauditd_printk_skb: 29 callbacks suppressed
	[Apr16 17:00] hrtimer: interrupt took 5042261 ns
	[  +0.908827] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [c29b0762ff0b] <==
	{"level":"info","ts":"2024-04-16T17:06:33.350784Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":964}
	{"level":"info","ts":"2024-04-16T17:06:33.393755Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":964,"took":"42.49244ms","hash":1730924367,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2433024,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-04-16T17:06:33.395361Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1730924367,"revision":964,"compact-revision":-1}
	{"level":"info","ts":"2024-04-16T17:11:33.360995Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1502}
	{"level":"info","ts":"2024-04-16T17:11:33.366072Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1502,"took":"4.116913ms","hash":127222243,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1818624,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-16T17:11:33.366162Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":127222243,"revision":1502,"compact-revision":964}
	{"level":"info","ts":"2024-04-16T17:15:11.421098Z","caller":"traceutil/trace.go:171","msg":"trace[1208553513] transaction","detail":"{read_only:false; response_revision:2431; number_of_response:1; }","duration":"155.410586ms","start":"2024-04-16T17:15:11.265667Z","end":"2024-04-16T17:15:11.421077Z","steps":["trace[1208553513] 'process raft request'  (duration: 155.135072ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:31.529032Z","caller":"traceutil/trace.go:171","msg":"trace[505251683] linearizableReadLoop","detail":"{readStateIndex:2832; appliedIndex:2831; }","duration":"107.445309ms","start":"2024-04-16T17:16:31.421572Z","end":"2024-04-16T17:16:31.529017Z","steps":["trace[505251683] 'read index received'  (duration: 107.319103ms)","trace[505251683] 'applied index is now lower than readState.Index'  (duration: 125.606µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:16:31.529184Z","caller":"traceutil/trace.go:171","msg":"trace[359290184] transaction","detail":"{read_only:false; response_revision:2575; number_of_response:1; }","duration":"197.441024ms","start":"2024-04-16T17:16:31.331735Z","end":"2024-04-16T17:16:31.529176Z","steps":["trace[359290184] 'process raft request'  (duration: 197.196912ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:31.529431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.83703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-04-16T17:16:31.52969Z","caller":"traceutil/trace.go:171","msg":"trace[1576069612] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2575; }","duration":"108.130545ms","start":"2024-04-16T17:16:31.421545Z","end":"2024-04-16T17:16:31.529676Z","steps":["trace[1576069612] 'agreement among raft nodes before linearized reading'  (duration: 107.801628ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.727834Z","caller":"traceutil/trace.go:171","msg":"trace[1449824028] transaction","detail":"{read_only:false; response_revision:2578; number_of_response:1; }","duration":"364.497189ms","start":"2024-04-16T17:16:33.363317Z","end":"2024-04-16T17:16:33.727815Z","steps":["trace[1449824028] 'process raft request'  (duration: 364.339681ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.729115Z","caller":"traceutil/trace.go:171","msg":"trace[948704194] linearizableReadLoop","detail":"{readStateIndex:2837; appliedIndex:2836; }","duration":"283.56914ms","start":"2024-04-16T17:16:33.445533Z","end":"2024-04-16T17:16:33.729102Z","steps":["trace[948704194] 'read index received'  (duration: 282.906606ms)","trace[948704194] 'applied index is now lower than readState.Index'  (duration: 662.034µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:16:33.72965Z","caller":"traceutil/trace.go:171","msg":"trace[1908879286] transaction","detail":"{read_only:false; response_revision:2579; number_of_response:1; }","duration":"291.495046ms","start":"2024-04-16T17:16:33.438143Z","end":"2024-04-16T17:16:33.729638Z","steps":["trace[1908879286] 'process raft request'  (duration: 290.677204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.729668Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:16:33.363297Z","time spent":"364.643596ms","remote":"127.0.0.1:49456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":25,"response count":0,"response size":38,"request content":"compare:<key:\"compact_rev_key\" version:3 > success:<request_put:<key:\"compact_rev_key\" value_size:4 >> failure:<request_range:<key:\"compact_rev_key\" > >"}
	{"level":"warn","ts":"2024-04-16T17:16:33.729962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.040139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-04-16T17:16:33.73064Z","caller":"traceutil/trace.go:171","msg":"trace[1591257630] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2579; }","duration":"186.677072ms","start":"2024-04-16T17:16:33.543885Z","end":"2024-04-16T17:16:33.730562Z","steps":["trace[1591257630] 'agreement among raft nodes before linearized reading'  (duration: 185.842129ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.730022Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.488987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:16:33.731097Z","caller":"traceutil/trace.go:171","msg":"trace[339406949] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2579; }","duration":"285.581443ms","start":"2024-04-16T17:16:33.445505Z","end":"2024-04-16T17:16:33.731087Z","steps":["trace[339406949] 'agreement among raft nodes before linearized reading'  (duration: 284.501387ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.730066Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.750168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-04-16T17:16:33.731323Z","caller":"traceutil/trace.go:171","msg":"trace[1323315847] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2579; }","duration":"143.028733ms","start":"2024-04-16T17:16:33.588284Z","end":"2024-04-16T17:16:33.731313Z","steps":["trace[1323315847] 'agreement among raft nodes before linearized reading'  (duration: 141.746268ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.740796Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2041}
	{"level":"info","ts":"2024-04-16T17:16:33.745817Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2041,"took":"4.568334ms","hash":1427640317,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1806336,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-16T17:16:33.746025Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1427640317,"revision":2041,"compact-revision":1502}
	{"level":"info","ts":"2024-04-16T17:16:40.98492Z","caller":"traceutil/trace.go:171","msg":"trace[2045382782] transaction","detail":"{read_only:false; response_revision:2627; number_of_response:1; }","duration":"150.576419ms","start":"2024-04-16T17:16:40.834317Z","end":"2024-04-16T17:16:40.984893Z","steps":["trace[2045382782] 'process raft request'  (duration: 150.385009ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:18:34 up 23 min,  0 users,  load average: 0.27, 0.27, 0.20
	Linux ha-022600 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [be245de9ef54] <==
	I0416 17:17:31.591669       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:17:41.598001       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:17:41.598040       1 main.go:227] handling current node
	I0416 17:17:41.598052       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:17:41.598058       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:17:51.610941       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:17:51.611141       1 main.go:227] handling current node
	I0416 17:17:51.611158       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:17:51.611846       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:18:01.623682       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:18:01.623890       1 main.go:227] handling current node
	I0416 17:18:01.623908       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:18:01.623919       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:18:11.638877       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:18:11.638992       1 main.go:227] handling current node
	I0416 17:18:11.639006       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:18:11.639014       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:18:21.651294       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:18:21.651329       1 main.go:227] handling current node
	I0416 17:18:21.651339       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:18:21.651344       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:18:31.666296       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:18:31.666405       1 main.go:227] handling current node
	I0416 17:18:31.666420       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:18:31.666429       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [4fd5df8c9fd3] <==
	I0416 16:56:35.510308       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 16:56:35.512679       1 aggregator.go:165] initial CRD sync complete...
	I0416 16:56:35.516211       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 16:56:35.516249       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 16:56:35.516256       1 cache.go:39] Caches are synced for autoregister controller
	I0416 16:56:35.517473       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 16:56:35.522352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 16:56:35.529558       1 controller.go:624] quota admission added evaluator for: namespaces
	I0416 16:56:35.542494       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 16:56:36.411016       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0416 16:56:36.418409       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0416 16:56:36.419376       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 16:56:37.172553       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 16:56:37.235069       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 16:56:37.370838       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0416 16:56:37.381797       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.81.207]
	I0416 16:56:37.383264       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 16:56:37.388718       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 16:56:37.435733       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 16:56:38.737496       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 16:56:38.764389       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0416 16:56:38.781093       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 16:56:51.466047       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0416 16:56:51.868826       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	http2: server: error reading preface from client 172.19.93.94:54156: read tcp 172.19.95.254:8443->172.19.93.94:54156: read: connection reset by peer
	
	
	==> kube-controller-manager [a7fb69539df6] <==
	I0416 16:57:04.995404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="188.309µs"
	I0416 16:57:05.057328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="96.005µs"
	I0416 16:57:05.964586       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0416 16:57:07.181900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="167.009µs"
	I0416 16:57:07.224163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="15.307781ms"
	I0416 16:57:07.224903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="88.905µs"
	I0416 16:57:07.277301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="19.898845ms"
	I0416 16:57:07.277810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="53.303µs"
	I0416 17:00:45.709324       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 3"
	I0416 17:00:45.728545       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-rpfpf"
	I0416 17:00:45.745464       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-mnl84"
	I0416 17:00:45.756444       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-gph6r"
	I0416 17:00:45.770175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.082711ms"
	I0416 17:00:45.784213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.744211ms"
	I0416 17:00:45.810992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="26.530372ms"
	I0416 17:00:45.811146       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="42.802µs"
	I0416 17:00:48.413892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="8.465463ms"
	I0416 17:00:48.413981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.302µs"
	I0416 17:16:37.436480       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-022600-m03\" does not exist"
	I0416 17:16:37.446130       1 range_allocator.go:380] "Set node PodCIDR" node="ha-022600-m03" podCIDRs=["10.244.1.0/24"]
	I0416 17:16:37.459239       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7c2px"
	I0416 17:16:37.461522       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ss5lp"
	I0416 17:16:41.186805       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-022600-m03"
	I0416 17:16:41.187824       1 event.go:376] "Event occurred" object="ha-022600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-022600-m03 event: Registered Node ha-022600-m03 in Controller"
	I0416 17:16:54.835196       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-022600-m03"
	
	
	==> kube-proxy [05db92f49e0d] <==
	I0416 16:56:54.468581       1 server_others.go:72] "Using iptables proxy"
	I0416 16:56:54.505964       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.81.207"]
	I0416 16:56:54.583838       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:56:54.584172       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:56:54.584273       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:56:54.590060       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:56:54.590806       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:56:54.591014       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:56:54.592331       1 config.go:188] "Starting service config controller"
	I0416 16:56:54.592517       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:56:54.592625       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:56:54.592689       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:56:54.594058       1 config.go:315] "Starting node config controller"
	I0416 16:56:54.594215       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 16:56:54.693900       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 16:56:54.693964       1 shared_informer.go:318] Caches are synced for service config
	I0416 16:56:54.694328       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e042d71e8b0e] <==
	W0416 16:56:36.501819       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 16:56:36.501922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 16:56:36.507709       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 16:56:36.507948       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 16:56:36.573671       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 16:56:36.573877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 16:56:36.602162       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 16:56:36.602205       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 16:56:36.621966       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.622272       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.648392       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 16:56:36.648623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 16:56:36.694872       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 16:56:36.694970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 16:56:36.804118       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.805424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.821863       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.822231       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.866017       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 16:56:36.866298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 16:56:36.904820       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 16:56:36.905097       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 16:56:36.917996       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.918036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0416 16:56:39.298679       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 17:13:38 ha-022600 kubelet[2220]: E0416 17:13:38.996896    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:13:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:13:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:13:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:13:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:14:38 ha-022600 kubelet[2220]: E0416 17:14:38.994207    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:14:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:14:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:14:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:14:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:15:38 ha-022600 kubelet[2220]: E0416 17:15:38.994251    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:15:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:15:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:15:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:15:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:16:38 ha-022600 kubelet[2220]: E0416 17:16:38.994203    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:16:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:16:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:16:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:16:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:17:38 ha-022600 kubelet[2220]: E0416 17:17:38.995310    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:17:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:17:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:17:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:17:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0416 17:18:27.309706   13768 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600: (10.7973952s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-022600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7fdf7869d9-gph6r busybox-7fdf7869d9-mnl84
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-022600 describe pod busybox-7fdf7869d9-gph6r busybox-7fdf7869d9-mnl84
helpers_test.go:282: (dbg) kubectl --context ha-022600 describe pod busybox-7fdf7869d9-gph6r busybox-7fdf7869d9-mnl84:

-- stdout --
	Name:             busybox-7fdf7869d9-gph6r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h29q5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-h29q5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m36s (x4 over 18m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-7fdf7869d9-mnl84
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xhwqb (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-xhwqb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m36s (x4 over 18m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (47.02s)

TestMultiControlPlane/serial/CopyFile (61.93s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-022600 status --output json -v=7 --alsologtostderr: exit status 2 (32.1463042s)

-- stdout --
	[{"Name":"ha-022600","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-022600-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false},{"Name":"ha-022600-m03","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

-- /stdout --
** stderr ** 
	W0416 17:18:46.268074   11724 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 17:18:46.325961   11724 out.go:291] Setting OutFile to fd 824 ...
	I0416 17:18:46.326960   11724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:18:46.326960   11724 out.go:304] Setting ErrFile to fd 960...
	I0416 17:18:46.326960   11724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:18:46.341097   11724 out.go:298] Setting JSON to true
	I0416 17:18:46.341170   11724 mustload.go:65] Loading cluster: ha-022600
	I0416 17:18:46.341267   11724 notify.go:220] Checking for updates...
	I0416 17:18:46.341950   11724 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:18:46.342052   11724 status.go:255] checking status of ha-022600 ...
	I0416 17:18:46.342916   11724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 17:18:48.291842   11724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:18:48.291932   11724 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:18:48.291932   11724 status.go:330] ha-022600 host status = "Running" (err=<nil>)
	I0416 17:18:48.292002   11724 host.go:66] Checking if "ha-022600" exists ...
	I0416 17:18:48.292672   11724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 17:18:50.230817   11724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:18:50.230817   11724 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:18:50.230958   11724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 17:18:52.577249   11724 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 17:18:52.577540   11724 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:18:52.577540   11724 host.go:66] Checking if "ha-022600" exists ...
	I0416 17:18:52.588704   11724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 17:18:52.588704   11724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 17:18:54.550279   11724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:18:54.550279   11724 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:18:54.550381   11724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 17:18:56.831756   11724 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 17:18:56.832634   11724 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:18:56.832997   11724 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 17:18:56.925196   11724 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.336168s)
	I0416 17:18:56.934793   11724 ssh_runner.go:195] Run: systemctl --version
	I0416 17:18:56.953666   11724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:18:56.977172   11724 kubeconfig.go:125] found "ha-022600" server: "https://172.19.95.254:8443"
	I0416 17:18:56.977255   11724 api_server.go:166] Checking apiserver status ...
	I0416 17:18:56.986032   11724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:18:57.014870   11724 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2100/cgroup
	W0416 17:18:57.033222   11724 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2100/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 17:18:57.043548   11724 ssh_runner.go:195] Run: ls
	I0416 17:18:57.050233   11724 api_server.go:253] Checking apiserver healthz at https://172.19.95.254:8443/healthz ...
	I0416 17:18:57.057318   11724 api_server.go:279] https://172.19.95.254:8443/healthz returned 200:
	ok
	I0416 17:18:57.057658   11724 status.go:422] ha-022600 apiserver status = Running (err=<nil>)
	I0416 17:18:57.057813   11724 status.go:257] ha-022600 status: &{Name:ha-022600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 17:18:57.057873   11724 status.go:255] checking status of ha-022600-m02 ...
	I0416 17:18:57.058582   11724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:18:58.969672   11724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:18:58.969672   11724 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:18:58.969672   11724 status.go:330] ha-022600-m02 host status = "Running" (err=<nil>)
	I0416 17:18:58.969672   11724 host.go:66] Checking if "ha-022600-m02" exists ...
	I0416 17:18:58.971292   11724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:19:00.921562   11724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:19:00.921644   11724 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:19:00.921644   11724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:19:03.184396   11724 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 17:19:03.185314   11724 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:19:03.185314   11724 host.go:66] Checking if "ha-022600-m02" exists ...
	I0416 17:19:03.194973   11724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 17:19:03.194973   11724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:19:05.102103   11724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:19:05.102989   11724 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:19:05.102989   11724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:19:07.424553   11724 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 17:19:07.424553   11724 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:19:07.424553   11724 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 17:19:07.511050   11724 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3158323s)
	I0416 17:19:07.520777   11724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:19:07.544365   11724 kubeconfig.go:125] found "ha-022600" server: "https://172.19.95.254:8443"
	I0416 17:19:07.544365   11724 api_server.go:166] Checking apiserver status ...
	I0416 17:19:07.557595   11724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0416 17:19:07.579313   11724 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0416 17:19:07.579347   11724 status.go:422] ha-022600-m02 apiserver status = Stopped (err=<nil>)
	I0416 17:19:07.579347   11724 status.go:257] ha-022600-m02 status: &{Name:ha-022600-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 17:19:07.579473   11724 status.go:255] checking status of ha-022600-m03 ...
	I0416 17:19:07.580104   11724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m03 ).state
	I0416 17:19:09.564841   11724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:19:09.564841   11724 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:19:09.564841   11724 status.go:330] ha-022600-m03 host status = "Running" (err=<nil>)
	I0416 17:19:09.564841   11724 host.go:66] Checking if "ha-022600-m03" exists ...
	I0416 17:19:09.566009   11724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m03 ).state
	I0416 17:19:11.571274   11724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:19:11.571415   11724 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:19:11.571415   11724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 17:19:13.963100   11724 main.go:141] libmachine: [stdout =====>] : 172.19.93.94
	
	I0416 17:19:13.963649   11724 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:19:13.963649   11724 host.go:66] Checking if "ha-022600-m03" exists ...
	I0416 17:19:13.972706   11724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 17:19:13.972706   11724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m03 ).state
	I0416 17:19:15.895428   11724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:19:15.895693   11724 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:19:15.895778   11724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 17:19:18.152657   11724 main.go:141] libmachine: [stdout =====>] : 172.19.93.94
	
	I0416 17:19:18.152657   11724 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:19:18.153182   11724 sshutil.go:53] new ssh client: &{IP:172.19.93.94 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m03\id_rsa Username:docker}
	I0416 17:19:18.253525   11724 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.2805767s)
	I0416 17:19:18.262133   11724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:19:18.284454   11724 status.go:257] ha-022600-m03 status: &{Name:ha-022600-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:328: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-022600 status --output json -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600: (10.7383563s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-022600 logs -n 25: (7.3828204s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:13 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:13 UTC |
	|         | busybox-7fdf7869d9-rpfpf             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-rpfpf -- sh       |           |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.80.1             |           |                   |                |                     |                     |
	| node    | add -p ha-022600 -v=7                | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:16 UTC |
	|         | --alsologtostderr                    |           |                   |                |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:53:50
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:53:50.116950   12816 out.go:291] Setting OutFile to fd 784 ...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.117952   12816 out.go:304] Setting ErrFile to fd 696...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.138920   12816 out.go:298] Setting JSON to false
	I0416 16:53:50.141501   12816 start.go:129] hostinfo: {"hostname":"minikube5","uptime":24059,"bootTime":1713262370,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 16:53:50.141501   12816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 16:53:50.143700   12816 out.go:177] * [ha-022600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 16:53:50.144387   12816 notify.go:220] Checking for updates...
	I0416 16:53:50.144982   12816 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:53:50.145881   12816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:53:50.146373   12816 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 16:53:50.146987   12816 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:53:50.147788   12816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:53:50.149250   12816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:53:54.959514   12816 out.go:177] * Using the hyperv driver based on user configuration
	I0416 16:53:54.959811   12816 start.go:297] selected driver: hyperv
	I0416 16:53:54.959811   12816 start.go:901] validating driver "hyperv" against <nil>
	I0416 16:53:54.959811   12816 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:53:55.002641   12816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:53:55.003374   12816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:53:55.003816   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:53:55.003816   12816 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0416 16:53:55.003816   12816 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0416 16:53:55.003816   12816 start.go:340] cluster config:
	{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:53:55.003816   12816 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:53:55.005700   12816 out.go:177] * Starting "ha-022600" primary control-plane node in "ha-022600" cluster
	I0416 16:53:55.006053   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:53:55.006397   12816 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 16:53:55.006397   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:53:55.006539   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:53:55.006809   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:53:55.007075   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:53:55.007821   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json: {Name:mkc2f9747189bfa0db5ea21e93e1afafc0e89eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:53:55.008149   12816 start.go:360] acquireMachinesLock for ha-022600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:53:55.009151   12816 start.go:364] duration metric: took 1.0024ms to acquireMachinesLock for "ha-022600"
	I0416 16:53:55.009151   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:53:55.009151   12816 start.go:125] createHost starting for "" (driver="hyperv")
	I0416 16:53:55.010175   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:53:55.010397   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:53:55.010397   12816 client.go:168] LocalClient.Create starting
	I0416 16:53:55.010740   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011200   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011541   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:56.853713   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:58.347399   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:59.667644   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:02.791736   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:54:03.131710   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: Creating VM...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:05.824937   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:54:05.825022   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:54:07.398351   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:54:07.398635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:07.398635   12816 main.go:141] libmachine: Creating VHD
	I0416 16:54:07.398733   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:54:10.982944   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E9EB5342-E929-43B6-8B97-D7BDD354CEE1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:54:10.983213   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:54:10.992883   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -SizeBytes 20000MB
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-022600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600 -DynamicMemoryEnabled $false
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:21.397696   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600 -Count 2
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:23.302296   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\boot2docker.iso'
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:25.541060   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd'
	I0416 16:54:27.919093   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: Starting VM...
	I0416 16:54:27.919462   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600
	I0416 16:54:30.480037   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:32.483346   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:34.785082   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:34.785271   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:35.799483   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:37.788898   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:40.058231   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:40.058742   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:41.064074   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:45.301253   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:45.301420   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:46.309647   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:50.614494   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:50.615195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:51.620909   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:53.639317   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:53.640351   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:53.640405   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:55.942630   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:54:55.943393   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:55.943471   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stderr =====>] : 
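The wait loop above alternates two PowerShell queries — `( Hyper-V\Get-VM ha-022600 ).state` and `(( ... ).networkadapters[0]).ipaddresses[0]` — until the guest reports an address (here `172.19.81.207`, on the fifth poll). A minimal sketch of that retry pattern, with `get_state`/`get_ip` as hypothetical stand-ins for those PowerShell calls:

```shell
# Poll until the VM is Running and its first NIC has an IP address.
# get_state and get_ip are stand-ins for the Hyper-V queries in the log.
wait_for_ip() {
  retries=$1
  i=0
  while [ "$i" -lt "$retries" ]; do
    if [ "$(get_state)" = "Running" ]; then
      ip=$(get_ip)
      if [ -n "$ip" ]; then
        printf '%s\n' "$ip"
        return 0
      fi
    fi
    i=$((i + 1))
  done
  echo "host did not report an IP address" >&2
  return 1
}

# Stubbed usage: the VM is Running from the start, but the IP only
# shows up on the third poll, mimicking the log above.
cnt=$(mktemp)
get_state() { echo Running; }
get_ip() { echo x >> "$cnt"; [ "$(wc -l < "$cnt")" -ge 3 ] && echo 172.19.81.207; }
wait_for_ip 10
```

In the real driver each empty poll is followed by a roughly one-second pause (visible in the timestamps); the stub omits the sleep to keep the sketch short.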
	I0416 16:54:57.837395   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:54:57.837474   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:59.762683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:59.763360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:59.763440   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:02.010689   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:02.023158   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:02.023158   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:55:02.152140   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:55:02.152244   12816 buildroot.go:166] provisioning hostname "ha-022600"
	I0416 16:55:02.152322   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:03.957618   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:06.309822   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:06.310484   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:06.310484   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600 && echo "ha-022600" | sudo tee /etc/hostname
	I0416 16:55:06.479074   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600
	
	I0416 16:55:06.479182   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:08.433073   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:10.796713   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:10.797321   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:10.797321   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:55:10.944702   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:55:10.944870   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:55:10.944983   12816 buildroot.go:174] setting up certificates
	I0416 16:55:10.944983   12816 provision.go:84] configureAuth start
	I0416 16:55:10.945092   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:12.933614   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:17.088334   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:19.325791   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:19.326294   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:19.326294   12816 provision.go:143] copyHostCerts
	I0416 16:55:19.326294   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:55:19.326294   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:55:19.326294   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:55:19.326900   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:55:19.328097   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:55:19.328097   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:55:19.329417   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:55:19.329417   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:55:19.329417   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:55:19.330063   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:55:19.330726   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600 san=[127.0.0.1 172.19.81.207 ha-022600 localhost minikube]
	I0416 16:55:19.539117   12816 provision.go:177] copyRemoteCerts
	I0416 16:55:19.547114   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:55:19.547114   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:23.727019   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:23.834423   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.287066s)
	I0416 16:55:23.834577   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:55:23.835008   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:55:23.874966   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:55:23.875470   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0416 16:55:23.923921   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:55:23.923921   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:55:23.965042   12816 provision.go:87] duration metric: took 13.0192422s to configureAuth
	I0416 16:55:23.965042   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:55:23.965741   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:55:23.965827   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:25.905339   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:25.905903   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:25.905986   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:28.170079   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:28.170419   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:28.173356   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:28.173937   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:28.173937   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:55:28.301727   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:55:28.301727   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:55:28.302425   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:55:28.302506   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:30.181889   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:32.398667   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:32.399299   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:32.399475   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:55:32.556658   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:55:32.556887   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:34.446928   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:34.446969   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:34.447053   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:36.709442   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:36.710242   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:36.714111   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:36.714437   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:36.714437   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:55:38.655929   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
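The `diff ... || { mv ...; systemctl restart ...; }` command above is an update-if-changed pattern: the candidate unit is written to `docker.service.new`, and it is only swapped in (with a daemon-reload and restart) when it differs from the installed file — here the installed file did not exist yet, so the diff failed and the new unit was moved into place. A minimal local illustration of the pattern, using a throwaway temp directory instead of systemd paths:

```shell
# Write the candidate config to <file>.new, then swap it in only when it
# differs from (or is missing as) the installed copy. This keeps repeated
# provisioning runs idempotent: an unchanged unit triggers no restart.
tmp=$(mktemp -d)
printf 'v1\n' > "$tmp/docker.service.new"
# First run: no installed file yet, so diff fails and the new file moves in.
diff -u "$tmp/docker.service" "$tmp/docker.service.new" 2>/dev/null \
  || mv "$tmp/docker.service.new" "$tmp/docker.service"
cat "$tmp/docker.service"
```

On a second run with an identical `.new` file, `diff` succeeds and the `||` branch — the move and, in the real command, the service restart — is skipped entirely.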
	I0416 16:55:38.655929   12816 machine.go:97] duration metric: took 40.8162201s to provisionDockerMachine
	I0416 16:55:38.656036   12816 client.go:171] duration metric: took 1m43.6397622s to LocalClient.Create
	I0416 16:55:38.656036   12816 start.go:167] duration metric: took 1m43.6397622s to libmachine.API.Create "ha-022600"
	I0416 16:55:38.656036   12816 start.go:293] postStartSetup for "ha-022600" (driver="hyperv")
	I0416 16:55:38.656036   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:55:38.665072   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:55:38.665072   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:40.515910   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:42.764754   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:42.765404   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:42.765404   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:42.879399   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2140881s)
	I0416 16:55:42.892410   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:55:42.899117   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:55:42.899117   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:55:42.899734   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:55:42.901086   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:55:42.901154   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:55:42.911237   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:55:42.927664   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:55:42.975440   12816 start.go:296] duration metric: took 4.3191592s for postStartSetup
	I0416 16:55:42.977201   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:44.831562   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:47.134349   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:47.134788   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:47.135000   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:55:47.137270   12816 start.go:128] duration metric: took 1m52.1217609s to createHost
	I0416 16:55:47.137270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:49.024657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:51.238446   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:51.238526   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:51.242455   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:51.243052   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:51.243052   12816 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 16:55:51.369469   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286551.540248133
	
	I0416 16:55:51.369575   12816 fix.go:216] guest clock: 1713286551.540248133
	I0416 16:55:51.369575   12816 fix.go:229] Guest: 2024-04-16 16:55:51.540248133 +0000 UTC Remote: 2024-04-16 16:55:47.1372703 +0000 UTC m=+117.146546101 (delta=4.402977833s)
	I0416 16:55:51.369790   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:53.407581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:55.667543   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:55.667688   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:55.667688   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286551
	I0416 16:55:55.810591   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:55:51 UTC 2024
	
	I0416 16:55:55.810700   12816 fix.go:236] clock set: Tue Apr 16 16:55:51 UTC 2024
	 (err=<nil>)
	I0416 16:55:55.810700   12816 start.go:83] releasing machines lock for "ha-022600", held for 2m0.7946995s
	I0416 16:55:55.810965   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:57.711672   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:59.985139   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:59.985210   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:59.988730   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:55:59.988803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:59.998550   12816 ssh_runner.go:195] Run: cat /version.json
	I0416 16:55:59.998550   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:01.995788   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.995959   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.996084   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:04.379274   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.379356   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.379701   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.391360   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.392161   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.392520   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.469159   12816 ssh_runner.go:235] Completed: cat /version.json: (4.4703555s)
	I0416 16:56:04.479363   12816 ssh_runner.go:195] Run: systemctl --version
	I0416 16:56:04.584079   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5950892s)
	I0416 16:56:04.593130   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:56:04.602217   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:56:04.610705   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:56:04.639084   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:56:04.639119   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:04.639119   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:04.684127   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:56:04.713899   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:56:04.734297   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:56:04.745020   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:56:04.776657   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.806087   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:56:04.854166   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.890388   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:56:04.918140   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:56:04.946595   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:56:04.975408   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
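The run of `sed` commands above rewrites containerd's `config.toml` in place (sandbox image, cgroup driver, runtime version, CNI dir). A runnable reproduction of the first two rewrites against a scratch copy; the file contents here are a minimal, assumed excerpt of a containerd config:

```shell
# Apply the same sandbox_image / SystemdCgroup rewrites to a scratch config.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep -E 'sandbox_image|SystemdCgroup' "$cfg"
```

The `\1` back-reference preserves the original indentation, which is why the log's expressions capture the leading spaces.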
	I0416 16:56:05.001633   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:56:05.028505   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:56:05.053299   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:05.230466   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:56:05.260161   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:05.269988   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:56:05.302694   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.335619   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:56:05.368663   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.402792   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.435612   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:56:05.483431   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.505797   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:05.548843   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:56:05.563980   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:56:05.582552   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:56:05.624048   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:56:05.804495   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:56:05.984936   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:56:05.985183   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:56:06.032244   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:06.217075   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:08.662995   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4457805s)
	I0416 16:56:08.670977   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 16:56:08.701542   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:08.730698   12816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 16:56:08.941813   12816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 16:56:09.145939   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.331138   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 16:56:09.370232   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:09.409657   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.615575   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 16:56:09.726879   12816 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 16:56:09.737760   12816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 16:56:09.746450   12816 start.go:562] Will wait 60s for crictl version
	I0416 16:56:09.755840   12816 ssh_runner.go:195] Run: which crictl
	I0416 16:56:09.771470   12816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:56:09.827603   12816 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 16:56:09.836477   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.874967   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.907967   12816 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 16:56:09.908249   12816 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: 172.19.80.1/20
	I0416 16:56:09.924842   12816 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 16:56:09.931830   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
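The `/etc/hosts` update above uses a grep-v/append/replace pattern that stays idempotent across reruns: strip any existing `host.minikube.internal` entry, then append the current gateway IP. A runnable sketch against a scratch hosts file; the 172.19.80.1 address comes from the log, while the stale 172.19.80.9 entry is hypothetical:

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.19.80.9\thost.minikube.internal\n' > "$hosts"
# drop any existing entry for the name, then append the current host IP
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.19.80.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Running the block again would leave exactly one `host.minikube.internal` line, which is the point of the pattern.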
	I0416 16:56:09.968931   12816 kubeadm.go:877] updating cluster {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:56:09.968931   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:56:09.975955   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:09.996899   12816 docker.go:685] Got preloaded images: 
	I0416 16:56:09.996899   12816 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0416 16:56:10.008276   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:10.035609   12816 ssh_runner.go:195] Run: which lz4
	I0416 16:56:10.042582   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0416 16:56:10.050849   12816 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 16:56:10.058074   12816 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 16:56:10.058074   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0416 16:56:11.721910   12816 docker.go:649] duration metric: took 1.6789563s to copy over tarball
	I0416 16:56:11.731181   12816 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 16:56:20.333529   12816 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.60186s)
	I0416 16:56:20.333529   12816 ssh_runner.go:146] rm: /preloaded.tar.lz4
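The preload step above copies an lz4-compressed image tarball to the guest, untars it into `/var`, and deletes the tarball. A sketch of the same copy/extract/delete flow using gzip (since lz4 may not be installed) and a scratch directory standing in for `/var`; paths and file names are illustrative:

```shell
work=$(mktemp -d)
mkdir -p "$work/src/lib/docker/overlay2"
echo layer-data > "$work/src/lib/docker/overlay2/layer1"
tar -C "$work/src" -czf "$work/preloaded.tar.gz" .   # stand-in for the .tar.lz4
mkdir -p "$work/var"
tar -C "$work/var" -xzf "$work/preloaded.tar.gz"     # mirrors `tar ... -C /var -xf`
rm "$work/preloaded.tar.gz"                          # mirrors the rm of /preloaded.tar.lz4
ls "$work/var/lib/docker/overlay2"
```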
	I0416 16:56:20.400516   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:20.419486   12816 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0416 16:56:20.469018   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:20.655543   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:23.229259   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5734984s)
	I0416 16:56:23.240705   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:23.262332   12816 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0416 16:56:23.262383   12816 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:56:23.262383   12816 kubeadm.go:928] updating node { 172.19.81.207 8443 v1.29.3 docker true true} ...
	I0416 16:56:23.262383   12816 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-022600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.81.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:56:23.270008   12816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 16:56:23.307277   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:23.307277   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:23.307362   12816 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:56:23.307406   12816 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.81.207 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-022600 NodeName:ha-022600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.81.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.81.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:56:23.307691   12816 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.81.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-022600"
	  kubeletExtraArgs:
	    node-ip: 172.19.81.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.81.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 16:56:23.307749   12816 kube-vip.go:111] generating kube-vip config ...
	I0416 16:56:23.318492   12816 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:56:23.343950   12816 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:56:23.344258   12816 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0416 16:56:23.353585   12816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:56:23.370542   12816 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:56:23.379813   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0416 16:56:23.397865   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0416 16:56:23.432291   12816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:56:23.462868   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0416 16:56:23.492579   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0416 16:56:23.534977   12816 ssh_runner.go:195] Run: grep 172.19.95.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:56:23.542734   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:56:23.575719   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:23.754395   12816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:56:23.781462   12816 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600 for IP: 172.19.81.207
	I0416 16:56:23.781462   12816 certs.go:194] generating shared ca certs ...
	I0416 16:56:23.781462   12816 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 16:56:23.783651   12816 certs.go:256] generating profile certs ...
	I0416 16:56:23.784402   12816 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key
	I0416 16:56:23.784569   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt with IP's: []
	I0416 16:56:23.984047   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt ...
	I0416 16:56:23.984047   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt: {Name:mk3ebdcb7f076a09a399313f7ed3edf14403a6a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.985977   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key ...
	I0416 16:56:23.985977   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key: {Name:mk94343a485b04f4b25a0ccd3245e197e7ecbec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.986215   12816 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648
	I0416 16:56:23.987265   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.81.207 172.19.95.254]
	I0416 16:56:24.317716   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 ...
	I0416 16:56:24.317716   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648: {Name:mk30f7000427979a1bcf8d6fc3995d1f7ccc170c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.319797   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 ...
	I0416 16:56:24.319797   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648: {Name:mk95e9e3e0f96031ef005f6c36470c216303a111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.320163   12816 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt
	I0416 16:56:24.331288   12816 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key
	I0416 16:56:24.332214   12816 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key
	I0416 16:56:24.332214   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt with IP's: []
	I0416 16:56:24.406574   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt ...
	I0416 16:56:24.406574   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt: {Name:mk73158a02cd8861e90a3b76d50704b360c358ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.407013   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key ...
	I0416 16:56:24.407013   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key: {Name:mk6842e2af8fadaf278ec7592edd5bd96f07c8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.408078   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:56:24.408945   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:56:24.409732   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:56:24.417870   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:56:24.418145   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 16:56:24.418533   12816 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 16:56:24.418533   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 16:56:24.418811   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 16:56:24.418990   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 16:56:24.419161   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 16:56:24.419368   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 16:56:24.419647   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 16:56:24.419767   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:24.419867   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 16:56:24.420003   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:56:24.466985   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:56:24.509816   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:56:24.554817   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:56:24.603006   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 16:56:24.646596   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 16:56:24.694120   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:56:24.741669   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 16:56:24.785888   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 16:56:24.829403   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:56:24.891821   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 16:56:24.933883   12816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 16:56:24.975091   12816 ssh_runner.go:195] Run: openssl version
	I0416 16:56:24.994129   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 16:56:25.021821   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.028512   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.037989   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.054924   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:56:25.080011   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:56:25.106815   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.113980   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.126339   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.144599   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:56:25.170309   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 16:56:25.199080   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.206080   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.214031   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.237026   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
	I0416 16:56:25.266837   12816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:56:25.273408   12816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:56:25.273858   12816 kubeadm.go:391] StartCluster: {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:56:25.281991   12816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 16:56:25.314891   12816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 16:56:25.342248   12816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 16:56:25.368032   12816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 16:56:25.385737   12816 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 16:56:25.385737   12816 kubeadm.go:156] found existing configuration files:
	
	I0416 16:56:25.393851   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 16:56:25.410393   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 16:56:25.421874   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 16:56:25.453762   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 16:56:25.468769   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 16:56:25.477353   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 16:56:25.501898   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.515888   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 16:56:25.524885   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.548518   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 16:56:25.563660   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 16:56:25.572269   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 16:56:25.587981   12816 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 16:56:25.791977   12816 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 16:56:25.791977   12816 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 16:56:25.958638   12816 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 16:56:25.959035   12816 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 16:56:25.959403   12816 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 16:56:26.228464   12816 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 16:56:26.229544   12816 out.go:204]   - Generating certificates and keys ...
	I0416 16:56:26.229862   12816 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 16:56:26.230882   12816 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 16:56:26.359024   12816 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 16:56:26.583044   12816 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 16:56:26.715543   12816 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 16:56:27.014892   12816 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 16:56:27.414264   12816 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 16:56:27.414467   12816 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.642396   12816 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 16:56:27.642770   12816 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.844566   12816 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 16:56:28.089475   12816 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 16:56:28.543900   12816 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 16:56:28.548586   12816 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 16:56:29.051829   12816 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 16:56:29.485679   12816 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 16:56:29.830737   12816 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 16:56:30.055972   12816 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 16:56:30.310446   12816 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 16:56:30.311113   12816 out.go:204]   - Booting up control plane ...
	I0416 16:56:30.311289   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 16:56:30.311970   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 16:56:30.317049   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 16:56:30.342443   12816 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 16:56:30.526725   12816 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 16:56:37.142045   12816 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.615653 seconds
	I0416 16:56:37.159025   12816 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 16:56:37.175108   12816 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 16:56:37.707867   12816 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 16:56:37.708715   12816 kubeadm.go:309] [mark-control-plane] Marking the node ha-022600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 16:56:38.222729   12816 kubeadm.go:309] [bootstrap-token] Using token: a3r5qn.ikva200bfcppykg5
	I0416 16:56:38.223819   12816 out.go:204]   - Configuring RBAC rules ...
	I0416 16:56:38.224231   12816 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 16:56:38.232416   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 16:56:38.244982   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 16:56:38.249926   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 16:56:38.257723   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 16:56:38.262029   12816 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 16:56:38.279883   12816 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 16:56:38.592701   12816 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 16:56:38.638273   12816 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 16:56:38.639572   12816 kubeadm.go:309] 
	I0416 16:56:38.640154   12816 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 16:56:38.640230   12816 kubeadm.go:309] 
	I0416 16:56:38.640982   12816 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 16:56:38.641038   12816 kubeadm.go:309] 
	I0416 16:56:38.641299   12816 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 16:56:38.641581   12816 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309] 
	I0416 16:56:38.641989   12816 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 16:56:38.642031   12816 kubeadm.go:309] 
	I0416 16:56:38.642184   12816 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 16:56:38.642228   12816 kubeadm.go:309] 
	I0416 16:56:38.642350   12816 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 16:56:38.642660   12816 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 16:56:38.642862   12816 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 16:56:38.642900   12816 kubeadm.go:309] 
	I0416 16:56:38.643166   12816 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 16:56:38.643426   12816 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 16:56:38.643426   12816 kubeadm.go:309] 
	I0416 16:56:38.643613   12816 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.643867   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 16:56:38.643909   12816 kubeadm.go:309] 	--control-plane 
	I0416 16:56:38.643961   12816 kubeadm.go:309] 
	I0416 16:56:38.644233   12816 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 16:56:38.644272   12816 kubeadm.go:309] 
	I0416 16:56:38.644444   12816 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.644734   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 16:56:38.647455   12816 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 16:56:38.647488   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:38.647539   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:38.648246   12816 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 16:56:38.657141   12816 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 16:56:38.671263   12816 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 16:56:38.671263   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 16:56:38.722410   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 16:56:39.265655   12816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-022600 minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=ha-022600 minikube.k8s.io/primary=true
	I0416 16:56:39.290244   12816 ops.go:34] apiserver oom_adj: -16
	I0416 16:56:39.441163   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.950155   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.453751   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.955147   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.455931   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.953044   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.454696   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.949299   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.454962   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.953447   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.456402   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.956686   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.449476   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.951602   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.451988   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.949212   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.449356   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.950703   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.458777   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.956811   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.456669   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.943595   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.443906   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.950503   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.454863   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.944285   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:52.083562   12816 kubeadm.go:1107] duration metric: took 12.8170858s to wait for elevateKubeSystemPrivileges
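The lines above show minikube polling `kubectl get sa default` roughly every 500 ms until the default service account exists (12.8 s in this run). A minimal shell sketch of that retry pattern — the `wait_for` helper below is illustrative only, not minikube's actual Go implementation:

```shell
# Retry a command every 0.5s until it succeeds or the deadline passes.
# Usage: wait_for <timeout_seconds> <command...>
wait_for() {
  timeout=$1; shift
  deadline=$(( $(date +%s) + timeout ))
  until "$@" >/dev/null 2>&1; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      return 1   # timed out before the command ever succeeded
    fi
    sleep 0.5
  done
}

# Here `true` stands in for the real readiness check (kubectl get sa default).
wait_for 5 true && echo "ready"
```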
	W0416 16:56:52.083816   12816 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 16:56:52.083816   12816 kubeadm.go:393] duration metric: took 26.808438s to StartCluster
	I0416 16:56:52.083816   12816 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.084214   12816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:52.086643   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.088384   12816 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:56:52.088384   12816 start.go:240] waiting for startup goroutines ...
	I0416 16:56:52.088384   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 16:56:52.088384   12816 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 16:56:52.088630   12816 addons.go:69] Setting storage-provisioner=true in profile "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:234] Setting addon storage-provisioner=true in "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:69] Setting default-storageclass=true in profile "ha-022600"
	I0416 16:56:52.088850   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:52.088964   12816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-022600"
	I0416 16:56:52.088964   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:56:52.090289   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.090671   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.207597   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 16:56:52.469504   12816 start.go:946] {"host.minikube.internal": 172.19.80.1} host record injected into CoreDNS's ConfigMap
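The sed pipeline above splices a `hosts` block (plus a `log` directive) into the CoreDNS Corefile so in-cluster pods can resolve the host gateway by name. Assuming the stock minikube Corefile layout, the fragment injected in this run looks roughly like:

```
        hosts {
           172.19.80.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
```

The `fallthrough` keeps CoreDNS forwarding any other name to the upstream resolver as before.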
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.165583   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.165635   12816 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:54.165635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.166734   12816 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:56:54.166340   12816 kapi.go:59] client config for ha-022600: &rest.Config{Host:"https://172.19.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 16:56:54.167133   12816 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:56:54.167133   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 16:56:54.167133   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:54.167791   12816 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 16:56:54.168180   12816 addons.go:234] Setting addon default-storageclass=true in "ha-022600"
	I0416 16:56:54.168347   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:54.169251   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:56.312581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.312988   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313046   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313270   12816 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 16:56:56.313270   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.330966   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:58.735727   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:58.735876   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.736103   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:58.898469   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:00.676245   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:57:00.828151   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 16:57:01.248041   12816 round_trippers.go:463] GET https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0416 16:57:01.248041   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.248041   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.248041   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.261890   12816 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0416 16:57:01.262478   12816 round_trippers.go:463] PUT https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0416 16:57:01.262478   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Content-Type: application/json
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.262478   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.268964   12816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:57:01.269995   12816 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 16:57:01.270495   12816 addons.go:505] duration metric: took 9.181591s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 16:57:01.270576   12816 start.go:245] waiting for cluster config update ...
	I0416 16:57:01.270618   12816 start.go:254] writing updated cluster config ...
	I0416 16:57:01.271859   12816 out.go:177] 
	I0416 16:57:01.284169   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:57:01.284169   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.285951   12816 out.go:177] * Starting "ha-022600-m02" control-plane node in "ha-022600" cluster
	I0416 16:57:01.286952   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:57:01.286952   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:57:01.286952   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:57:01.286952   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:57:01.286952   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.296247   12816 start.go:360] acquireMachinesLock for ha-022600-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:57:01.297324   12816 start.go:364] duration metric: took 1.0773ms to acquireMachinesLock for "ha-022600-m02"
	I0416 16:57:01.297559   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:57:01.297559   12816 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0416 16:57:01.297559   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:57:01.297559   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:57:01.297559   12816 client.go:168] LocalClient.Create starting
	I0416 16:57:01.298838   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299293   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:57:03.017072   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:57:03.017279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:03.017366   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:09.316740   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:57:09.669552   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: Creating VM...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:12.690107   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:57:12.690185   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:14.267157   12816 main.go:141] libmachine: Creating VHD
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : FE960248-03C1-43D6-B7AE-A60D4C86308B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:57:17.758158   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:57:20.709379   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:20.709950   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:20.710019   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -SizeBytes 20000MB
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-022600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600-m02 -DynamicMemoryEnabled $false
	I0416 16:57:28.159153   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:28.159229   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:28.159409   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600-m02 -Count 2
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\boot2docker.iso'
	I0416 16:57:32.420739   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:32.421735   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:32.421878   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd'
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: Starting VM...
	I0416 16:57:34.780971   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600-m02
	I0416 16:57:37.369505   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:57:37.369767   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:39.415286   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:42.700464   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:48.000886   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:49.992438   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:49.992894   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:49.992930   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:53.290891   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:55.287716   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:55.287962   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:55.288037   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:58.572803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:02.905391   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:04.899479   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:58:04.899479   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:06.914869   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:09.273511   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:09.273546   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:09.277783   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:09.278406   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:09.278406   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:58:09.413281   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:58:09.413281   12816 buildroot.go:166] provisioning hostname "ha-022600-m02"
	I0416 16:58:09.413281   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:11.439079   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:13.805295   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:13.805684   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:13.805684   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600-m02 && echo "ha-022600-m02" | sudo tee /etc/hostname
	I0416 16:58:13.957933   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600-m02
	
	I0416 16:58:13.958021   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:18.176996   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:18.178002   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:18.182057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:18.182681   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:18.182681   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:58:18.315751   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
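The /etc/hosts edit the provisioner runs above is idempotent: it only touches the file when the hostname is missing, and it either rewrites an existing `127.0.1.1` line or appends one. A minimal standalone sketch of the same logic, run against a temp copy rather than the real `/etc/hosts` (the sample file contents are hypothetical):

```shell
# Reproduce the hostname update shown in the log, on a throwaway file.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
name="ha-022600-m02"

# Skip entirely if the name is already present (idempotency).
if ! grep -q "[[:space:]]$name\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # Rewrite the existing 127.0.1.1 entry in place.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    # No 127.0.1.1 entry yet: append one.
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
grep '^127\.0\.1\.1' "$hosts"
```

Running it a second time is a no-op, which matters here because provisioning can be retried on the same VM.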
	I0416 16:58:18.315853   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:58:18.315853   12816 buildroot.go:174] setting up certificates
	I0416 16:58:18.315853   12816 provision.go:84] configureAuth start
	I0416 16:58:18.315853   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:20.243862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:22.525833   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:22.525945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:22.526057   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:24.418894   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:26.735560   12816 provision.go:143] copyHostCerts
	I0416 16:58:26.736546   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:58:26.736627   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:58:26.737290   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:58:26.737900   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:58:26.737900   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:58:26.738191   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:58:26.738908   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:58:26.738977   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:58:26.739652   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600-m02 san=[127.0.0.1 172.19.80.125 ha-022600-m02 localhost minikube]
	I0416 16:58:26.917277   12816 provision.go:177] copyRemoteCerts
	I0416 16:58:26.926308   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:58:26.926600   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:28.830343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:31.113681   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:31.229222   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3026703s)
	I0416 16:58:31.229222   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:58:31.229700   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:58:31.279666   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:58:31.280307   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:58:31.328101   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:58:31.328245   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:58:31.382563   12816 provision.go:87] duration metric: took 13.065969s to configureAuth
	I0416 16:58:31.382563   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:58:31.383343   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:58:31.383343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:33.331275   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:35.653673   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:35.653721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:35.656855   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:35.657430   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:35.657430   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:58:35.781565   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:58:35.781565   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:58:35.781565   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:58:35.782090   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:37.696344   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:39.961057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:39.961515   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:39.961564   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.81.207"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:58:40.123664   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.81.207
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:58:40.123818   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:42.064878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:42.064974   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:42.065152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:44.330103   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:44.330731   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:44.330731   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:58:46.283136   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
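The `diff ... || { mv ...; systemctl ...; }` command above is a write-then-swap pattern: the rendered unit is written to `docker.service.new`, and only if it differs from the installed unit (or, as in the `can't stat` case above, the unit does not exist yet) is it moved into place and the daemon reloaded. A sketch of the same pattern on plain temp files (file names and contents are hypothetical):

```shell
# "Write .new, replace only if changed" pattern from the log above.
unit=$(mktemp); new="${unit}.new"
echo "ExecStart=/usr/bin/dockerd --old-flag" > "$unit"
echo "ExecStart=/usr/bin/dockerd --new-flag" > "$new"

# diff exits non-zero when the files differ OR when $unit is missing,
# so both first-time install and config change take the replace branch.
if ! diff -u "$unit" "$new" >/dev/null 2>&1; then
  mv "$new" "$unit"
  # On a real host this is where the reload would happen:
  # systemctl daemon-reload && systemctl enable docker && systemctl restart docker
fi
grep -- --new-flag "$unit"
```

An unchanged rerun leaves the installed unit untouched and skips the restart, which avoids needlessly bouncing Docker on every provision pass.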
	
	I0416 16:58:46.283253   12816 machine.go:97] duration metric: took 41.3814279s to provisionDockerMachine
	I0416 16:58:46.283253   12816 client.go:171] duration metric: took 1m44.9797412s to LocalClient.Create
	I0416 16:58:46.283253   12816 start.go:167] duration metric: took 1m44.9797412s to libmachine.API.Create "ha-022600"
	I0416 16:58:46.283253   12816 start.go:293] postStartSetup for "ha-022600-m02" (driver="hyperv")
	I0416 16:58:46.283345   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:58:46.292724   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:58:46.292724   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:50.480821   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:50.575284   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2823171s)
	I0416 16:58:50.584260   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:58:50.591292   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:58:50.591900   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:58:50.591900   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:58:50.601073   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:58:50.618807   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:58:50.671301   12816 start.go:296] duration metric: took 4.3877068s for postStartSetup
	I0416 16:58:50.673161   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:52.621684   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:54.923763   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:58:54.926483   12816 start.go:128] duration metric: took 1m53.622481s to createHost
	I0416 16:58:54.926657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:56.793184   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:59.024255   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:59.025184   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:59.029108   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:59.029633   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:59.029730   12816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:58:59.149333   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286739.315259098
	
	I0416 16:58:59.149333   12816 fix.go:216] guest clock: 1713286739.315259098
	I0416 16:58:59.149333   12816 fix.go:229] Guest: 2024-04-16 16:58:59.315259098 +0000 UTC Remote: 2024-04-16 16:58:54.9265716 +0000 UTC m=+304.925199701 (delta=4.388687498s)
	I0416 16:58:59.149333   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:01.054656   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:03.307071   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:59:03.307459   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:59:03.307531   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286739
	I0416 16:59:03.449024   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:58:59 UTC 2024
	
	I0416 16:59:03.449024   12816 fix.go:236] clock set: Tue Apr 16 16:58:59 UTC 2024
	 (err=<nil>)
	I0416 16:59:03.449024   12816 start.go:83] releasing machines lock for "ha-022600-m02", held for 2m2.1447745s
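The `fix.go` lines above read the guest clock over SSH, compute the delta against the host (4.388s here), and push the host epoch into the guest with `sudo date -s @<epoch>`. A sketch of that drift check with simulated timestamps (the 4-second offset and the 2-second threshold are hypothetical, chosen only to mirror the log):

```shell
# Simulated guest-clock drift check, mirroring the fix.go logic above.
guest=1713286739              # epoch a real run reads from the guest
host=$(( guest - 4 ))         # pretend the host lags ~4s, as in the log
delta=$(( guest - host ))
[ "$delta" -lt 0 ] && delta=$(( -delta ))   # absolute value

if [ "$delta" -gt 2 ]; then
  # A real run would SSH in and execute: sudo date -s @<host-epoch>
  echo "would run: sudo date -s @$guest"
fi
```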
	I0416 16:59:03.450039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:07.739042   12816 out.go:177] * Found network options:
	I0416 16:59:07.739784   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.740404   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.741027   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.741505   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 16:59:07.742708   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.744988   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:59:07.745153   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:07.752817   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 16:59:07.752817   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:09.758953   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:12.157582   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.158536   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.159044   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.185179   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.257436   12816 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5043642s)
	W0416 16:59:12.257436   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:59:12.266545   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:59:12.367206   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:59:12.367296   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:12.367330   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6219642s)
	I0416 16:59:12.367330   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:12.423201   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:59:12.453988   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:59:12.472992   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:59:12.482991   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:59:12.510864   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.538866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:59:12.565866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.597751   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:59:12.622761   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:59:12.648905   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:59:12.674904   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:59:12.713452   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:59:12.741495   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:59:12.768497   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:12.975524   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
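The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place: pinning the sandbox image, switching the runtime to `runc.v2`, and forcing `SystemdCgroup = false` to match the "cgroupfs" driver choice. The same sed idiom applied to a throwaway config (the sample TOML content is hypothetical):

```shell
# Flip SystemdCgroup with the same sed expression used in the log above.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# The ( *) capture preserves the line's original indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```

Because each expression only matches its own key, the edits are order-independent and safe to rerun.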
	I0416 16:59:13.011635   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:13.023647   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:59:13.058146   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.091991   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:59:13.139058   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.173081   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.208242   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:59:13.259511   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.282094   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:13.329081   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:59:13.344832   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:59:13.362131   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:59:13.403377   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:59:13.597444   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:59:13.768147   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:59:13.768278   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:59:13.808294   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:13.987216   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:00:15.104612   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1138396s)
	I0416 17:00:15.115049   12816 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 17:00:15.145752   12816 out.go:177] 
	W0416 17:00:15.146473   12816 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 16:58:45 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.076842920Z" level=info msg="Starting up"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.077687177Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.078706068Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.109665355Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138411128Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138448735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138508447Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138523049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138600164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138632670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138848110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138955930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139030244Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139045347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139142365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139433520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142495192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142588309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142778845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142795748Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143044695Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143174419Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143191422Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.152862930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153144583Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153313214Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153337519Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153354522Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153467543Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153957434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154159572Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154195179Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154212082Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154230586Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154258491Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154272393Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154287696Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154303599Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154317302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154330504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154344107Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154373612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154392516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154406618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154421121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154434024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154447526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154460128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154474031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154498536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154514539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154525841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154555046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154568249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154583952Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154604755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154629960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154642062Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154700973Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154916114Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155014532Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155030135Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155203567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155302486Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155325090Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155706861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155796078Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155907599Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155947306Z" level=info msg="containerd successfully booted in 0.047582s"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.119001526Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.129323458Z" level=info msg="Loading containers: start."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.358382320Z" level=info msg="Loading containers: done."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377033580Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377149301Z" level=info msg="Daemon has completed initialization"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.447556885Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 16:58:46 ha-022600-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.449134569Z" level=info msg="API listen on [::]:2376"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.178053148Z" level=info msg="Processing signal 'terminated'"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.179830517Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 16:59:14 ha-022600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.180814055Z" level=info msg="Daemon shutdown complete"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181020363Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181054564Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:59:15 ha-022600-m02 dockerd[1019]: time="2024-04-16T16:59:15.248212596Z" level=info msg="Starting up"
	Apr 16 17:00:15 ha-022600-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0416 17:00:15.146611   12816 out.go:239] * 
	W0416 17:00:15.147806   12816 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 17:00:15.148383   12816 out.go:177] 
	
	
	==> Docker <==
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:46 ha-022600 dockerd[1325]: 2024/04/16 17:17:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:46 ha-022600 dockerd[1325]: 2024/04/16 17:17:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d38b1a5f4caa8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   8a4de3aa24af1       busybox-7fdf7869d9-rpfpf
	3fe545bfad4e6       cbb01a7bd410d                                                                                         22 minutes ago      Running             coredns                   0                   093278b3840ef       coredns-76f75df574-qm89x
	979dee88be2b4       cbb01a7bd410d                                                                                         22 minutes ago      Running             coredns                   0                   4ad38b0d59335       coredns-76f75df574-ww2r6
	257879ecf06b2       6e38f40d628db                                                                                         22 minutes ago      Running             storage-provisioner       0                   bf991c3e34e2d       storage-provisioner
	be245de9ef545       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              22 minutes ago      Running             kindnet-cni               0                   92c35b3fd0967       kindnet-mwqvl
	05db92f49e0df       a1d263b5dc5b0                                                                                         22 minutes ago      Running             kube-proxy                0                   12380f49c1509       kube-proxy-2vddt
	d1ba82cd26254       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     23 minutes ago      Running             kube-vip                  0                   fa2c75c4c8d59       kube-vip-ha-022600
	a7fb69539df62       6052a25da3f97                                                                                         23 minutes ago      Running             kube-controller-manager   0                   b536621e20d4b       kube-controller-manager-ha-022600
	4fd5df8c9fd37       39f995c9f1996                                                                                         23 minutes ago      Running             kube-apiserver            0                   5a7a1e80caeb4       kube-apiserver-ha-022600
	e042d71e8b0e8       8c390d98f50c0                                                                                         23 minutes ago      Running             kube-scheduler            0                   5a2551c91a1b6       kube-scheduler-ha-022600
	c29b0762ff0bf       3861cfcd7c04c                                                                                         23 minutes ago      Running             etcd                      0                   c8a9aa3126cf5       etcd-ha-022600
	
	
	==> coredns [3fe545bfad4e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55981 - 4765 "HINFO IN 3735046377920793891.8143170502200932773. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058936595s
	[INFO] 10.244.0.4:43350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000388921s
	[INFO] 10.244.0.4:35317 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.052221997s
	[INFO] 10.244.0.4:52074 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.040069369s
	[INFO] 10.244.0.4:49068 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.053312593s
	[INFO] 10.244.0.4:54711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123507s
	[INFO] 10.244.0.4:44694 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037006811s
	[INFO] 10.244.0.4:33399 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124606s
	[INFO] 10.244.0.4:37329 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000241612s
	[INFO] 10.244.0.4:57333 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131407s
	[INFO] 10.244.0.4:38806 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060403s
	[INFO] 10.244.0.4:48948 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000263914s
	[INFO] 10.244.0.4:51825 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177309s
	[INFO] 10.244.0.4:53272 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00018301s
	
	
	==> coredns [979dee88be2b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50127 - 24072 "HINFO IN 7665836187497317301.2285362183679153792. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027543487s
	[INFO] 10.244.0.4:34822 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000224011s
	[INFO] 10.244.0.4:48911 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000349218s
	[INFO] 10.244.0.4:43369 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023699624s
	[INFO] 10.244.0.4:56309 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000258914s
	[INFO] 10.244.0.4:36791 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.003463479s
	[INFO] 10.244.0.4:55996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000301816s
	[INFO] 10.244.0.4:35967 - 5 "PTR IN 1.80.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000116506s
	
	
	==> describe nodes <==
	Name:               ha-022600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-022600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-022600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:56:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-022600
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:19:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:16:34 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:16:34 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:16:34 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:16:34 +0000   Tue, 16 Apr 2024 16:57:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.81.207
	  Hostname:    ha-022600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4674338fa494bbcb2e21e2b4385c5e1
	  System UUID:                201025fc-0c03-cc49-a194-29d6500971a2
	  Boot ID:                    6ae5bedd-6e8e-4f58-b08c-8e9912fd04de
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-rpfpf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-76f75df574-qm89x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 coredns-76f75df574-ww2r6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-ha-022600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kindnet-mwqvl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	  kube-system                 kube-apiserver-ha-022600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-022600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-2vddt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-022600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-022600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22m   kube-proxy       
	  Normal  Starting                 22m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m   kubelet          Node ha-022600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m   kubelet          Node ha-022600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m   kubelet          Node ha-022600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22m   node-controller  Node ha-022600 event: Registered Node ha-022600 in Controller
	  Normal  NodeReady                22m   kubelet          Node ha-022600 status is now: NodeReady
	
	
	Name:               ha-022600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-022600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-022600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T17_16_38_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:16:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-022600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:19:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:17:08 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:17:08 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:17:08 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:17:08 +0000   Tue, 16 Apr 2024 17:16:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.93.94
	  Hostname:    ha-022600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 cefa169b716045589e59382d0939ad48
	  System UUID:                25782c5b-4e02-0547-b063-db6b9c5f1f5b
	  Boot ID:                    e7c67d41-aa2d-47a1-952b-fa7ff5422e05
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7c2px       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m59s
	  kube-system                 kube-proxy-ss5lp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m48s                  kube-proxy       
	  Normal  Starting                 2m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m59s (x2 over 2m59s)  kubelet          Node ha-022600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m59s (x2 over 2m59s)  kubelet          Node ha-022600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m59s (x2 over 2m59s)  kubelet          Node ha-022600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m55s                  node-controller  Node ha-022600-m03 event: Registered Node ha-022600-m03 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-022600-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.656516] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr16 16:55] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.165290] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[Apr16 16:56] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.091843] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.493988] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.172637] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.230010] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +2.695048] systemd-fstab-generator[1185]: Ignoring "noauto" option for root device
	[  +0.219400] systemd-fstab-generator[1197]: Ignoring "noauto" option for root device
	[  +0.196554] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.267217] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[ +11.053282] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +0.095458] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.012264] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +6.758798] systemd-fstab-generator[1720]: Ignoring "noauto" option for root device
	[  +0.093227] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.850543] systemd-fstab-generator[2213]: Ignoring "noauto" option for root device
	[  +0.130310] kauditd_printk_skb: 72 callbacks suppressed
	[ +15.381320] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.386371] kauditd_printk_skb: 29 callbacks suppressed
	[Apr16 17:00] hrtimer: interrupt took 5042261 ns
	[  +0.908827] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [c29b0762ff0b] <==
	{"level":"info","ts":"2024-04-16T17:06:33.350784Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":964}
	{"level":"info","ts":"2024-04-16T17:06:33.393755Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":964,"took":"42.49244ms","hash":1730924367,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2433024,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-04-16T17:06:33.395361Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1730924367,"revision":964,"compact-revision":-1}
	{"level":"info","ts":"2024-04-16T17:11:33.360995Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1502}
	{"level":"info","ts":"2024-04-16T17:11:33.366072Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1502,"took":"4.116913ms","hash":127222243,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1818624,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-16T17:11:33.366162Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":127222243,"revision":1502,"compact-revision":964}
	{"level":"info","ts":"2024-04-16T17:15:11.421098Z","caller":"traceutil/trace.go:171","msg":"trace[1208553513] transaction","detail":"{read_only:false; response_revision:2431; number_of_response:1; }","duration":"155.410586ms","start":"2024-04-16T17:15:11.265667Z","end":"2024-04-16T17:15:11.421077Z","steps":["trace[1208553513] 'process raft request'  (duration: 155.135072ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:31.529032Z","caller":"traceutil/trace.go:171","msg":"trace[505251683] linearizableReadLoop","detail":"{readStateIndex:2832; appliedIndex:2831; }","duration":"107.445309ms","start":"2024-04-16T17:16:31.421572Z","end":"2024-04-16T17:16:31.529017Z","steps":["trace[505251683] 'read index received'  (duration: 107.319103ms)","trace[505251683] 'applied index is now lower than readState.Index'  (duration: 125.606µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:16:31.529184Z","caller":"traceutil/trace.go:171","msg":"trace[359290184] transaction","detail":"{read_only:false; response_revision:2575; number_of_response:1; }","duration":"197.441024ms","start":"2024-04-16T17:16:31.331735Z","end":"2024-04-16T17:16:31.529176Z","steps":["trace[359290184] 'process raft request'  (duration: 197.196912ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:31.529431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.83703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-04-16T17:16:31.52969Z","caller":"traceutil/trace.go:171","msg":"trace[1576069612] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2575; }","duration":"108.130545ms","start":"2024-04-16T17:16:31.421545Z","end":"2024-04-16T17:16:31.529676Z","steps":["trace[1576069612] 'agreement among raft nodes before linearized reading'  (duration: 107.801628ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.727834Z","caller":"traceutil/trace.go:171","msg":"trace[1449824028] transaction","detail":"{read_only:false; response_revision:2578; number_of_response:1; }","duration":"364.497189ms","start":"2024-04-16T17:16:33.363317Z","end":"2024-04-16T17:16:33.727815Z","steps":["trace[1449824028] 'process raft request'  (duration: 364.339681ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.729115Z","caller":"traceutil/trace.go:171","msg":"trace[948704194] linearizableReadLoop","detail":"{readStateIndex:2837; appliedIndex:2836; }","duration":"283.56914ms","start":"2024-04-16T17:16:33.445533Z","end":"2024-04-16T17:16:33.729102Z","steps":["trace[948704194] 'read index received'  (duration: 282.906606ms)","trace[948704194] 'applied index is now lower than readState.Index'  (duration: 662.034µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:16:33.72965Z","caller":"traceutil/trace.go:171","msg":"trace[1908879286] transaction","detail":"{read_only:false; response_revision:2579; number_of_response:1; }","duration":"291.495046ms","start":"2024-04-16T17:16:33.438143Z","end":"2024-04-16T17:16:33.729638Z","steps":["trace[1908879286] 'process raft request'  (duration: 290.677204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.729668Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:16:33.363297Z","time spent":"364.643596ms","remote":"127.0.0.1:49456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":25,"response count":0,"response size":38,"request content":"compare:<key:\"compact_rev_key\" version:3 > success:<request_put:<key:\"compact_rev_key\" value_size:4 >> failure:<request_range:<key:\"compact_rev_key\" > >"}
	{"level":"warn","ts":"2024-04-16T17:16:33.729962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.040139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-04-16T17:16:33.73064Z","caller":"traceutil/trace.go:171","msg":"trace[1591257630] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2579; }","duration":"186.677072ms","start":"2024-04-16T17:16:33.543885Z","end":"2024-04-16T17:16:33.730562Z","steps":["trace[1591257630] 'agreement among raft nodes before linearized reading'  (duration: 185.842129ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.730022Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.488987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:16:33.731097Z","caller":"traceutil/trace.go:171","msg":"trace[339406949] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2579; }","duration":"285.581443ms","start":"2024-04-16T17:16:33.445505Z","end":"2024-04-16T17:16:33.731087Z","steps":["trace[339406949] 'agreement among raft nodes before linearized reading'  (duration: 284.501387ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.730066Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.750168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-04-16T17:16:33.731323Z","caller":"traceutil/trace.go:171","msg":"trace[1323315847] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2579; }","duration":"143.028733ms","start":"2024-04-16T17:16:33.588284Z","end":"2024-04-16T17:16:33.731313Z","steps":["trace[1323315847] 'agreement among raft nodes before linearized reading'  (duration: 141.746268ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.740796Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2041}
	{"level":"info","ts":"2024-04-16T17:16:33.745817Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2041,"took":"4.568334ms","hash":1427640317,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1806336,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-16T17:16:33.746025Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1427640317,"revision":2041,"compact-revision":1502}
	{"level":"info","ts":"2024-04-16T17:16:40.98492Z","caller":"traceutil/trace.go:171","msg":"trace[2045382782] transaction","detail":"{read_only:false; response_revision:2627; number_of_response:1; }","duration":"150.576419ms","start":"2024-04-16T17:16:40.834317Z","end":"2024-04-16T17:16:40.984893Z","steps":["trace[2045382782] 'process raft request'  (duration: 150.385009ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:19:36 up 24 min,  0 users,  load average: 0.26, 0.29, 0.21
	Linux ha-022600 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [be245de9ef54] <==
	I0416 17:18:31.666429       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:18:41.672452       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:18:41.672492       1 main.go:227] handling current node
	I0416 17:18:41.672503       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:18:41.672509       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:18:51.687001       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:18:51.687091       1 main.go:227] handling current node
	I0416 17:18:51.687103       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:18:51.687110       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:19:01.692513       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:19:01.693031       1 main.go:227] handling current node
	I0416 17:19:01.693225       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:19:01.693312       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:19:11.708587       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:19:11.708699       1 main.go:227] handling current node
	I0416 17:19:11.708714       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:19:11.708828       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:19:21.723525       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:19:21.723646       1 main.go:227] handling current node
	I0416 17:19:21.723661       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:19:21.723669       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:19:31.735111       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:19:31.735216       1 main.go:227] handling current node
	I0416 17:19:31.735230       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:19:31.735238       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [4fd5df8c9fd3] <==
	I0416 16:56:35.510308       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 16:56:35.512679       1 aggregator.go:165] initial CRD sync complete...
	I0416 16:56:35.516211       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 16:56:35.516249       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 16:56:35.516256       1 cache.go:39] Caches are synced for autoregister controller
	I0416 16:56:35.517473       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 16:56:35.522352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 16:56:35.529558       1 controller.go:624] quota admission added evaluator for: namespaces
	I0416 16:56:35.542494       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 16:56:36.411016       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0416 16:56:36.418409       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0416 16:56:36.419376       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 16:56:37.172553       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 16:56:37.235069       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 16:56:37.370838       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0416 16:56:37.381797       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.81.207]
	I0416 16:56:37.383264       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 16:56:37.388718       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 16:56:37.435733       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 16:56:38.737496       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 16:56:38.764389       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0416 16:56:38.781093       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 16:56:51.466047       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0416 16:56:51.868826       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	http2: server: error reading preface from client 172.19.93.94:54156: read tcp 172.19.95.254:8443->172.19.93.94:54156: read: connection reset by peer
	
	
	==> kube-controller-manager [a7fb69539df6] <==
	I0416 16:57:04.995404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="188.309µs"
	I0416 16:57:05.057328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="96.005µs"
	I0416 16:57:05.964586       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0416 16:57:07.181900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="167.009µs"
	I0416 16:57:07.224163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="15.307781ms"
	I0416 16:57:07.224903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="88.905µs"
	I0416 16:57:07.277301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="19.898845ms"
	I0416 16:57:07.277810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="53.303µs"
	I0416 17:00:45.709324       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 3"
	I0416 17:00:45.728545       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-rpfpf"
	I0416 17:00:45.745464       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-mnl84"
	I0416 17:00:45.756444       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-gph6r"
	I0416 17:00:45.770175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.082711ms"
	I0416 17:00:45.784213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.744211ms"
	I0416 17:00:45.810992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="26.530372ms"
	I0416 17:00:45.811146       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="42.802µs"
	I0416 17:00:48.413892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="8.465463ms"
	I0416 17:00:48.413981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.302µs"
	I0416 17:16:37.436480       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-022600-m03\" does not exist"
	I0416 17:16:37.446130       1 range_allocator.go:380] "Set node PodCIDR" node="ha-022600-m03" podCIDRs=["10.244.1.0/24"]
	I0416 17:16:37.459239       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7c2px"
	I0416 17:16:37.461522       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ss5lp"
	I0416 17:16:41.186805       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-022600-m03"
	I0416 17:16:41.187824       1 event.go:376] "Event occurred" object="ha-022600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-022600-m03 event: Registered Node ha-022600-m03 in Controller"
	I0416 17:16:54.835196       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-022600-m03"
	
	
	==> kube-proxy [05db92f49e0d] <==
	I0416 16:56:54.468581       1 server_others.go:72] "Using iptables proxy"
	I0416 16:56:54.505964       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.81.207"]
	I0416 16:56:54.583838       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:56:54.584172       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:56:54.584273       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:56:54.590060       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:56:54.590806       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:56:54.591014       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:56:54.592331       1 config.go:188] "Starting service config controller"
	I0416 16:56:54.592517       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:56:54.592625       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:56:54.592689       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:56:54.594058       1 config.go:315] "Starting node config controller"
	I0416 16:56:54.594215       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 16:56:54.693900       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 16:56:54.693964       1 shared_informer.go:318] Caches are synced for service config
	I0416 16:56:54.694328       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e042d71e8b0e] <==
	W0416 16:56:36.501819       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 16:56:36.501922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 16:56:36.507709       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 16:56:36.507948       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 16:56:36.573671       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 16:56:36.573877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 16:56:36.602162       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 16:56:36.602205       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 16:56:36.621966       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.622272       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.648392       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 16:56:36.648623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 16:56:36.694872       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 16:56:36.694970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 16:56:36.804118       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.805424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.821863       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.822231       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.866017       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 16:56:36.866298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 16:56:36.904820       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 16:56:36.905097       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 16:56:36.917996       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.918036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0416 16:56:39.298679       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 17:14:38 ha-022600 kubelet[2220]: E0416 17:14:38.994207    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:14:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:14:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:14:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:14:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:15:38 ha-022600 kubelet[2220]: E0416 17:15:38.994251    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:15:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:15:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:15:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:15:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:16:38 ha-022600 kubelet[2220]: E0416 17:16:38.994203    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:16:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:16:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:16:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:16:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:17:38 ha-022600 kubelet[2220]: E0416 17:17:38.995310    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:17:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:17:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:17:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:17:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:18:38 ha-022600 kubelet[2220]: E0416 17:18:38.994865    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:18:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:18:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:18:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:18:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0416 17:19:29.150642   10012 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
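The recurring kubelet entries above ("Could not set up iptables canary" / "can't initialize ip6tables table `nat'") come from the guest kernel lacking the IPv6 NAT table: kubelet periodically probes both iptables and ip6tables, and the ip6tables probe fails when the `ip6table_nat` module is not loaded. On a guest whose kernel ships that module, loading it at boot silences the error; a minimal sketch, assuming a systemd-based guest (minikube's buildroot image may simply not include the module, in which case the canary error is cosmetic):

```
# /etc/modules-load.d/ip6tables.conf
# Load the IPv6 NAT table at boot so `ip6tables -t nat` succeeds.
# Assumption: the kernel actually provides ip6table_nat; if it does
# not, the kubelet canary errors above are benign and can be ignored.
ip6table_nat
```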
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600: (10.8141987s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-022600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7fdf7869d9-gph6r busybox-7fdf7869d9-mnl84
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/CopyFile]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-022600 describe pod busybox-7fdf7869d9-gph6r busybox-7fdf7869d9-mnl84
helpers_test.go:282: (dbg) kubectl --context ha-022600 describe pod busybox-7fdf7869d9-gph6r busybox-7fdf7869d9-mnl84:

-- stdout --
	Name:             busybox-7fdf7869d9-gph6r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h29q5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-h29q5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  3m38s (x4 over 19m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-7fdf7869d9-mnl84
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xhwqb (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-xhwqb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  3m38s (x4 over 19m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

-- /stdout --
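Both busybox pods above are Pending with "1 node(s) didn't match pod anti-affinity rules": the test's Deployment spreads replicas with required pod anti-affinity, so once one replica occupies the only schedulable node, the remaining replicas have nowhere to go. A sketch of a manifest that reproduces this FailedScheduling event — the `app=busybox` label and image match the describe output, but the rest of the spec is an assumption, not the test's actual manifest:

```yaml
# Replicas refuse to co-schedule on the same hostname; with a single
# schedulable node, one pod runs and the others stay Pending.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: busybox
              topologyKey: kubernetes.io/hostname
      containers:
        - name: busybox
          image: gcr.io/k8s-minikube/busybox:1.28
          command: ["sleep", "3600"]
```

The stuck replicas are exactly what the post-mortem's field selector (`--field-selector=status.phase!=Running`) surfaces as non-running pods.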
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (61.93s)

TestMultiControlPlane/serial/StopSecondaryNode (93.72s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-022600 node stop m02 -v=7 --alsologtostderr: (40.4259536s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr: exit status 7 (23.174857s)

-- stdout --
	ha-022600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-022600-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-022600-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	W0416 17:20:28.623638    9796 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 17:20:28.677535    9796 out.go:291] Setting OutFile to fd 880 ...
	I0416 17:20:28.678294    9796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:20:28.678294    9796 out.go:304] Setting ErrFile to fd 1012...
	I0416 17:20:28.678294    9796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:20:28.693756    9796 out.go:298] Setting JSON to false
	I0416 17:20:28.693825    9796 mustload.go:65] Loading cluster: ha-022600
	I0416 17:20:28.693959    9796 notify.go:220] Checking for updates...
	I0416 17:20:28.693959    9796 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:20:28.694487    9796 status.go:255] checking status of ha-022600 ...
	I0416 17:20:28.695150    9796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 17:20:30.625938    9796 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:20:30.625938    9796 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:20:30.626506    9796 status.go:330] ha-022600 host status = "Running" (err=<nil>)
	I0416 17:20:30.626506    9796 host.go:66] Checking if "ha-022600" exists ...
	I0416 17:20:30.626709    9796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 17:20:32.587590    9796 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:20:32.588431    9796 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:20:32.588479    9796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 17:20:34.896039    9796 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 17:20:34.897052    9796 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:20:34.897052    9796 host.go:66] Checking if "ha-022600" exists ...
	I0416 17:20:34.905260    9796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 17:20:34.906268    9796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 17:20:36.788768    9796 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:20:36.788768    9796 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:20:36.789371    9796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 17:20:39.084383    9796 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 17:20:39.084383    9796 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:20:39.085408    9796 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 17:20:39.192550    9796 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.2870474s)
	I0416 17:20:39.201698    9796 ssh_runner.go:195] Run: systemctl --version
	I0416 17:20:39.222856    9796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:20:39.248369    9796 kubeconfig.go:125] found "ha-022600" server: "https://172.19.95.254:8443"
	I0416 17:20:39.248369    9796 api_server.go:166] Checking apiserver status ...
	I0416 17:20:39.257069    9796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:20:39.297285    9796 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2100/cgroup
	W0416 17:20:39.315399    9796 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2100/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 17:20:39.326250    9796 ssh_runner.go:195] Run: ls
	I0416 17:20:39.332827    9796 api_server.go:253] Checking apiserver healthz at https://172.19.95.254:8443/healthz ...
	I0416 17:20:39.342205    9796 api_server.go:279] https://172.19.95.254:8443/healthz returned 200:
	ok
	I0416 17:20:39.342205    9796 status.go:422] ha-022600 apiserver status = Running (err=<nil>)
	I0416 17:20:39.342205    9796 status.go:257] ha-022600 status: &{Name:ha-022600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 17:20:39.342205    9796 status.go:255] checking status of ha-022600-m02 ...
	I0416 17:20:39.342723    9796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:20:41.215288    9796 main.go:141] libmachine: [stdout =====>] : Off
	
	I0416 17:20:41.215371    9796 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:20:41.215371    9796 status.go:330] ha-022600-m02 host status = "Stopped" (err=<nil>)
	I0416 17:20:41.215371    9796 status.go:343] host is not running, skipping remaining checks
	I0416 17:20:41.215442    9796 status.go:257] ha-022600-m02 status: &{Name:ha-022600-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 17:20:41.215442    9796 status.go:255] checking status of ha-022600-m03 ...
	I0416 17:20:41.216084    9796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m03 ).state
	I0416 17:20:43.136203    9796 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:20:43.136203    9796 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:20:43.137142    9796 status.go:330] ha-022600-m03 host status = "Running" (err=<nil>)
	I0416 17:20:43.137223    9796 host.go:66] Checking if "ha-022600-m03" exists ...
	I0416 17:20:43.137393    9796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m03 ).state
	I0416 17:20:45.034721    9796 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:20:45.034721    9796 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:20:45.034721    9796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 17:20:47.333351    9796 main.go:141] libmachine: [stdout =====>] : 172.19.93.94
	
	I0416 17:20:47.333743    9796 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:20:47.333831    9796 host.go:66] Checking if "ha-022600-m03" exists ...
	I0416 17:20:47.342461    9796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 17:20:47.342461    9796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m03 ).state
	I0416 17:20:49.259425    9796 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:20:49.260171    9796 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:20:49.260171    9796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 17:20:51.529891    9796 main.go:141] libmachine: [stdout =====>] : 172.19.93.94
	
	I0416 17:20:51.529891    9796 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:20:51.529891    9796 sshutil.go:53] new ssh client: &{IP:172.19.93.94 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m03\id_rsa Username:docker}
	I0416 17:20:51.630088    9796 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.2873835s)
	I0416 17:20:51.639721    9796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:20:51.669599    9796 status.go:257] ha-022600-m03 status: &{Name:ha-022600-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
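The status probe in the stderr log above checks disk usage on each node by running `df -h /var | awk 'NR==2{print $5}'` over SSH: `NR==2` skips df's header row and selects the data row, and `$5` is the `Use%` column. A self-contained sketch of the same filter over canned df output (the filesystem line is sample data, not taken from a real VM):

```shell
# Same awk filter minikube's status check runs over SSH.
df_output='Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        17G  2.1G   14G  14% /var'
usage=$(printf '%s\n' "$df_output" | awk 'NR==2{print $5}')
echo "$usage"   # the Use% field of the second line
```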
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr": ha-022600
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-022600-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-022600-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:378: status says not three hosts are running: args "out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr": ha-022600
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-022600-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-022600-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:381: status says not three kubelets are running: args "out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr": ha-022600
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-022600-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-022600-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:384: status says not two apiservers are running: args "out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr": ha-022600
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-022600-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-022600-m03
type: Worker
host: Running
kubelet: Running

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600: (10.8412152s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 logs -n 25
E0416 17:21:06.883348    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-022600 logs -n 25: (7.4576214s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:13 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:13 UTC |
	|         | busybox-7fdf7869d9-rpfpf             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-rpfpf -- sh       |           |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.80.1             |           |                   |                |                     |                     |
	| node    | add -p ha-022600 -v=7                | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:16 UTC |
	|         | --alsologtostderr                    |           |                   |                |                     |                     |
	| node    | ha-022600 node stop m02 -v=7         | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:19 UTC | 16 Apr 24 17:20 UTC |
	|         | --alsologtostderr                    |           |                   |                |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:53:50
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:53:50.116950   12816 out.go:291] Setting OutFile to fd 784 ...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.117952   12816 out.go:304] Setting ErrFile to fd 696...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.138920   12816 out.go:298] Setting JSON to false
	I0416 16:53:50.141501   12816 start.go:129] hostinfo: {"hostname":"minikube5","uptime":24059,"bootTime":1713262370,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 16:53:50.141501   12816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 16:53:50.143700   12816 out.go:177] * [ha-022600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 16:53:50.144387   12816 notify.go:220] Checking for updates...
	I0416 16:53:50.144982   12816 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:53:50.145881   12816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:53:50.146373   12816 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 16:53:50.146987   12816 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:53:50.147788   12816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:53:50.149250   12816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:53:54.959514   12816 out.go:177] * Using the hyperv driver based on user configuration
	I0416 16:53:54.959811   12816 start.go:297] selected driver: hyperv
	I0416 16:53:54.959811   12816 start.go:901] validating driver "hyperv" against <nil>
	I0416 16:53:54.959811   12816 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:53:55.002641   12816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:53:55.003374   12816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:53:55.003816   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:53:55.003816   12816 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0416 16:53:55.003816   12816 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0416 16:53:55.003816   12816 start.go:340] cluster config:
	{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:53:55.003816   12816 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:53:55.005700   12816 out.go:177] * Starting "ha-022600" primary control-plane node in "ha-022600" cluster
	I0416 16:53:55.006053   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:53:55.006397   12816 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 16:53:55.006397   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:53:55.006539   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:53:55.006809   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:53:55.007075   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:53:55.007821   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json: {Name:mkc2f9747189bfa0db5ea21e93e1afafc0e89eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:53:55.008149   12816 start.go:360] acquireMachinesLock for ha-022600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:53:55.009151   12816 start.go:364] duration metric: took 1.0024ms to acquireMachinesLock for "ha-022600"
	I0416 16:53:55.009151   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:53:55.009151   12816 start.go:125] createHost starting for "" (driver="hyperv")
	I0416 16:53:55.010175   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:53:55.010397   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:53:55.010397   12816 client.go:168] LocalClient.Create starting
	I0416 16:53:55.010740   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011200   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011541   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:56.853713   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:58.347399   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:59.667644   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:02.791736   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:54:03.131710   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: Creating VM...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:05.824937   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:54:05.825022   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:54:07.398351   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:54:07.398635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:07.398635   12816 main.go:141] libmachine: Creating VHD
	I0416 16:54:07.398733   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:54:10.982944   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E9EB5342-E929-43B6-8B97-D7BDD354CEE1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:54:10.983213   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:54:10.992883   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -SizeBytes 20000MB
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-022600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600 -DynamicMemoryEnabled $false
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:21.397696   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600 -Count 2
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:23.302296   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\boot2docker.iso'
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:25.541060   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd'
	I0416 16:54:27.919093   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: Starting VM...
	I0416 16:54:27.919462   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600
	I0416 16:54:30.480037   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:32.483346   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:34.785082   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:34.785271   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:35.799483   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:37.788898   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:40.058231   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:40.058742   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:41.064074   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:45.301253   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:45.301420   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:46.309647   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:50.614494   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:50.615195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:51.620909   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:53.639317   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:53.640351   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:53.640405   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:55.942630   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:54:55.943393   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:55.943471   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:57.837395   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:54:57.837474   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:59.762683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:59.763360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:59.763440   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:02.010689   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:02.023158   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:02.023158   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:55:02.152140   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:55:02.152244   12816 buildroot.go:166] provisioning hostname "ha-022600"
	I0416 16:55:02.152322   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:03.957618   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:06.309822   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:06.310484   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:06.310484   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600 && echo "ha-022600" | sudo tee /etc/hostname
	I0416 16:55:06.479074   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600
	
	I0416 16:55:06.479182   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:08.433073   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:10.796713   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:10.797321   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:10.797321   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:55:10.944702   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
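The SSH command above keeps `/etc/hosts` idempotent: add a `127.0.1.1 <hostname>` entry only if no entry for the hostname exists, replacing any stale `127.0.1.1` line. A minimal re-run of that logic, pointed at a scratch file so it needs no sudo (assumes GNU grep/sed for in-place `-i`):

```shell
# Scratch-file re-run of the idempotent hostname-entry logic from the log;
# $HOSTS stands in for the real /etc/hosts.
HOSTS=$(mktemp)
NAME=ha-022600
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$HOSTS"
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then          # no entry for this hostname?
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then    # stale 127.0.1.1 line present?
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
grep '^127\.0\.1\.1' "$HOSTS"   # -> 127.0.1.1 ha-022600
```

Re-running the same command against the updated file is a no-op, which is why the provisioner can execute it on every start.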
	I0416 16:55:10.944870   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:55:10.944983   12816 buildroot.go:174] setting up certificates
	I0416 16:55:10.944983   12816 provision.go:84] configureAuth start
	I0416 16:55:10.945092   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:12.933614   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:17.088334   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:19.325791   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:19.326294   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:19.326294   12816 provision.go:143] copyHostCerts
	I0416 16:55:19.326294   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:55:19.326294   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:55:19.326294   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:55:19.326900   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:55:19.328097   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:55:19.328097   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:55:19.329417   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:55:19.329417   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:55:19.329417   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:55:19.330063   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:55:19.330726   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600 san=[127.0.0.1 172.19.81.207 ha-022600 localhost minikube]
	I0416 16:55:19.539117   12816 provision.go:177] copyRemoteCerts
	I0416 16:55:19.547114   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:55:19.547114   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:23.727019   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:23.834423   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.287066s)
	I0416 16:55:23.834577   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:55:23.835008   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:55:23.874966   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:55:23.875470   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0416 16:55:23.923921   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:55:23.923921   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:55:23.965042   12816 provision.go:87] duration metric: took 13.0192422s to configureAuth
	I0416 16:55:23.965042   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:55:23.965741   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:55:23.965827   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:25.905339   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:25.905903   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:25.905986   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:28.170079   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:28.170419   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:28.173356   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:28.173937   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:28.173937   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:55:28.301727   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:55:28.301727   12816 buildroot.go:70] root file system type: tmpfs
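The probe above is the same one-liner the provisioner ran over SSH; inside the buildroot guest it reports `tmpfs`, while on an ordinary Linux host you would typically see `ext4`, `xfs`, or `overlay` instead (assumes GNU coreutils `df` for `--output`):

```shell
# Detect the filesystem type backing / , as in the provisioning step above.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: $fstype"
```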
	I0416 16:55:28.302425   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:55:28.302506   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:30.181889   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:32.398667   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:32.399299   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:32.399475   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:55:32.556658   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:55:32.556887   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:34.446928   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:34.446969   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:34.447053   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:36.709442   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:36.710242   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:36.714111   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:36.714437   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:36.714437   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:55:38.655929   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 16:55:38.655929   12816 machine.go:97] duration metric: took 40.8162201s to provisionDockerMachine
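The `diff ... || { mv ...; systemctl ... }` command above installs the unit file only when it actually changed. A scratch-file re-run of that compare-then-swap pattern, where mktemp paths and an echoed line stand in for the real `/lib/systemd/system` paths and privileged restart:

```shell
# Install-only-if-changed pattern from the provisioning log, on scratch files.
dir=$(mktemp -d)
printf '[Service]\nExecStart=/usr/bin/dockerd\n' > "$dir/docker.service.new"
# diff exits non-zero when the target differs or -- as in the log's
# "can't stat" case -- does not exist yet, so the braced group runs
# only when an update is actually needed.
diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null || {
  mv "$dir/docker.service.new" "$dir/docker.service"
  echo "would run: systemctl daemon-reload && systemctl restart docker"
}
```

On an unchanged unit the `diff` succeeds and the restart is skipped entirely, which keeps repeated provisioning runs cheap.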
	I0416 16:55:38.656036   12816 client.go:171] duration metric: took 1m43.6397622s to LocalClient.Create
	I0416 16:55:38.656036   12816 start.go:167] duration metric: took 1m43.6397622s to libmachine.API.Create "ha-022600"
	I0416 16:55:38.656036   12816 start.go:293] postStartSetup for "ha-022600" (driver="hyperv")
	I0416 16:55:38.656036   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:55:38.665072   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:55:38.665072   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:40.515910   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:42.764754   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:42.765404   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:42.765404   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:42.879399   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2140881s)
	I0416 16:55:42.892410   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:55:42.899117   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:55:42.899117   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:55:42.899734   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:55:42.901086   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:55:42.901154   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:55:42.911237   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:55:42.927664   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:55:42.975440   12816 start.go:296] duration metric: took 4.3191592s for postStartSetup
	I0416 16:55:42.977201   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:44.831562   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:47.134349   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:47.134788   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:47.135000   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:55:47.137270   12816 start.go:128] duration metric: took 1m52.1217609s to createHost
	I0416 16:55:47.137270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:49.024657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:51.238446   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:51.238526   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:51.242455   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:51.243052   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:51.243052   12816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:55:51.369469   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286551.540248133
	
	I0416 16:55:51.369575   12816 fix.go:216] guest clock: 1713286551.540248133
	I0416 16:55:51.369575   12816 fix.go:229] Guest: 2024-04-16 16:55:51.540248133 +0000 UTC Remote: 2024-04-16 16:55:47.1372703 +0000 UTC m=+117.146546101 (delta=4.402977833s)
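The clock-fix step logged here reads the guest's epoch time over SSH, compares it against the controller's view, and resets the guest clock when they drift (the `sudo date -s @...` command a few lines below). A sketch of that comparison, using the guest timestamp from the log; the actual reset is only echoed, not executed:

```shell
# Guest-clock drift check, modeled on the fix.go log lines above.
guest=1713286551          # epoch seconds the guest reported (from the log)
host=$(date +%s)          # local epoch seconds
delta=$((host - guest))
echo "clock drift: ${delta}s"
echo "would run on guest: sudo date -s @${host}"
```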
	I0416 16:55:51.369790   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:53.407581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:55.667543   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:55.667688   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:55.667688   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286551
	I0416 16:55:55.810591   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:55:51 UTC 2024
	
	I0416 16:55:55.810700   12816 fix.go:236] clock set: Tue Apr 16 16:55:51 UTC 2024
	 (err=<nil>)
	I0416 16:55:55.810700   12816 start.go:83] releasing machines lock for "ha-022600", held for 2m0.7946995s
	I0416 16:55:55.810965   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:57.711672   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:59.985139   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:59.985210   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:59.988730   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:55:59.988803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:59.998550   12816 ssh_runner.go:195] Run: cat /version.json
	I0416 16:55:59.998550   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:01.995788   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.995959   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.996084   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:04.379274   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.379356   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.379701   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.391360   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.392161   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.392520   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.469159   12816 ssh_runner.go:235] Completed: cat /version.json: (4.4703555s)
	I0416 16:56:04.479363   12816 ssh_runner.go:195] Run: systemctl --version
	I0416 16:56:04.584079   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5950892s)
	I0416 16:56:04.593130   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:56:04.602217   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:56:04.610705   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:56:04.639084   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:56:04.639119   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:04.639119   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:04.684127   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:56:04.713899   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:56:04.734297   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:56:04.745020   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:56:04.776657   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.806087   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:56:04.854166   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.890388   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:56:04.918140   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:56:04.946595   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:56:04.975408   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:56:05.001633   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:56:05.028505   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:56:05.053299   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:05.230466   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:56:05.260161   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:05.269988   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:56:05.302694   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.335619   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:56:05.368663   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.402792   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.435612   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:56:05.483431   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.505797   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:05.548843   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:56:05.563980   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:56:05.582552   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:56:05.624048   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:56:05.804495   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:56:05.984936   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:56:05.985183   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:56:06.032244   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:06.217075   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:08.662995   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4457805s)
	I0416 16:56:08.670977   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 16:56:08.701542   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:08.730698   12816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 16:56:08.941813   12816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 16:56:09.145939   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.331138   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 16:56:09.370232   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:09.409657   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.615575   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 16:56:09.726879   12816 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 16:56:09.737760   12816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 16:56:09.746450   12816 start.go:562] Will wait 60s for crictl version
	I0416 16:56:09.755840   12816 ssh_runner.go:195] Run: which crictl
	I0416 16:56:09.771470   12816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:56:09.827603   12816 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 16:56:09.836477   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.874967   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.907967   12816 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 16:56:09.908249   12816 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: 172.19.80.1/20
	I0416 16:56:09.924842   12816 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 16:56:09.931830   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:56:09.968931   12816 kubeadm.go:877] updating cluster {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:56:09.968931   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:56:09.975955   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:09.996899   12816 docker.go:685] Got preloaded images: 
	I0416 16:56:09.996899   12816 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0416 16:56:10.008276   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:10.035609   12816 ssh_runner.go:195] Run: which lz4
	I0416 16:56:10.042582   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0416 16:56:10.050849   12816 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 16:56:10.058074   12816 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
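The failure above is expected on first start: the runner probes for the preload tarball with `stat`, and a nonzero exit means "absent, upload it". The probe pattern in isolation (hypothetical path, guaranteed missing):

```shell
# stat exits nonzero when the target is absent; the runner treats that
# as the signal to scp the preload tarball over.
if ! stat -c "%s %y" /tmp/definitely-missing-$$ 2>/dev/null; then
  echo "absent: will copy preloaded.tar.lz4"
fi
```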
	I0416 16:56:10.058074   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0416 16:56:11.721910   12816 docker.go:649] duration metric: took 1.6789563s to copy over tarball
	I0416 16:56:11.731181   12816 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 16:56:20.333529   12816 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.60186s)
	I0416 16:56:20.333529   12816 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 16:56:20.400516   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:20.419486   12816 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0416 16:56:20.469018   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:20.655543   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:23.229259   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5734984s)
	I0416 16:56:23.240705   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:23.262332   12816 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0416 16:56:23.262383   12816 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:56:23.262383   12816 kubeadm.go:928] updating node { 172.19.81.207 8443 v1.29.3 docker true true} ...
	I0416 16:56:23.262383   12816 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-022600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.81.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:56:23.270008   12816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 16:56:23.307277   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:23.307277   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:23.307362   12816 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:56:23.307406   12816 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.81.207 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-022600 NodeName:ha-022600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.81.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.81.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:56:23.307691   12816 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.81.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-022600"
	  kubeletExtraArgs:
	    node-ip: 172.19.81.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.81.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 16:56:23.307749   12816 kube-vip.go:111] generating kube-vip config ...
	I0416 16:56:23.318492   12816 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:56:23.343950   12816 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:56:23.344258   12816 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0416 16:56:23.353585   12816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:56:23.370542   12816 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:56:23.379813   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0416 16:56:23.397865   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0416 16:56:23.432291   12816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:56:23.462868   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0416 16:56:23.492579   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0416 16:56:23.534977   12816 ssh_runner.go:195] Run: grep 172.19.95.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:56:23.542734   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:56:23.575719   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:23.754395   12816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:56:23.781462   12816 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600 for IP: 172.19.81.207
	I0416 16:56:23.781462   12816 certs.go:194] generating shared ca certs ...
	I0416 16:56:23.781462   12816 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 16:56:23.783651   12816 certs.go:256] generating profile certs ...
	I0416 16:56:23.784402   12816 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key
	I0416 16:56:23.784569   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt with IP's: []
	I0416 16:56:23.984047   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt ...
	I0416 16:56:23.984047   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt: {Name:mk3ebdcb7f076a09a399313f7ed3edf14403a6a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.985977   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key ...
	I0416 16:56:23.985977   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key: {Name:mk94343a485b04f4b25a0ccd3245e197e7ecbec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.986215   12816 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648
	I0416 16:56:23.987265   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.81.207 172.19.95.254]
	I0416 16:56:24.317716   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 ...
	I0416 16:56:24.317716   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648: {Name:mk30f7000427979a1bcf8d6fc3995d1f7ccc170c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.319797   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 ...
	I0416 16:56:24.319797   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648: {Name:mk95e9e3e0f96031ef005f6c36470c216303a111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.320163   12816 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt
	I0416 16:56:24.331288   12816 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key
	I0416 16:56:24.332214   12816 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key
	I0416 16:56:24.332214   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt with IP's: []
	I0416 16:56:24.406574   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt ...
	I0416 16:56:24.406574   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt: {Name:mk73158a02cd8861e90a3b76d50704b360c358ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.407013   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key ...
	I0416 16:56:24.407013   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key: {Name:mk6842e2af8fadaf278ec7592edd5bd96f07c8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.408078   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:56:24.408945   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:56:24.409732   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:56:24.417870   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:56:24.418145   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 16:56:24.418533   12816 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 16:56:24.418533   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 16:56:24.418811   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 16:56:24.418990   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 16:56:24.419161   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 16:56:24.419368   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 16:56:24.419647   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 16:56:24.419767   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:24.419867   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 16:56:24.420003   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:56:24.466985   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:56:24.509816   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:56:24.554817   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:56:24.603006   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 16:56:24.646596   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 16:56:24.694120   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:56:24.741669   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 16:56:24.785888   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 16:56:24.829403   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:56:24.891821   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 16:56:24.933883   12816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 16:56:24.975091   12816 ssh_runner.go:195] Run: openssl version
	I0416 16:56:24.994129   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 16:56:25.021821   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.028512   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.037989   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.054924   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:56:25.080011   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:56:25.106815   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.113980   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.126339   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.144599   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:56:25.170309   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 16:56:25.199080   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.206080   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.214031   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.237026   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
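The cert-installation sequence above follows the standard OpenSSL c_rehash convention: each CA PEM under /etc/ssl/certs is linked at `<subject-hash>.0` so OpenSSL can find it by hash lookup. A minimal sketch of the same pattern with a throwaway self-signed cert (all paths here are scratch, not the minikube certs):

```shell
# Generate a disposable self-signed cert, then link it by subject hash,
# mirroring the `openssl x509 -hash` + `ln -fs` steps in the log.
d=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$d/demo.key" -out "$d/demo.pem" 2>/dev/null
h=$(openssl x509 -hash -noout -in "$d/demo.pem")
ln -fs "$d/demo.pem" "$d/$h.0"
ls -l "$d/$h.0"
rm -rf "$d"
```

This is why the log runs `openssl x509 -hash -noout` on each PEM before creating the `/etc/ssl/certs/<hash>.0` symlink: the link name is derived from the certificate's subject, not from its filename.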
	I0416 16:56:25.266837   12816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:56:25.273408   12816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:56:25.273858   12816 kubeadm.go:391] StartCluster: {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:56:25.281991   12816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 16:56:25.314891   12816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 16:56:25.342248   12816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 16:56:25.368032   12816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 16:56:25.385737   12816 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 16:56:25.385737   12816 kubeadm.go:156] found existing configuration files:
	
	I0416 16:56:25.393851   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 16:56:25.410393   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 16:56:25.421874   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 16:56:25.453762   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 16:56:25.468769   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 16:56:25.477353   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 16:56:25.501898   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.515888   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 16:56:25.524885   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.548518   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 16:56:25.563660   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 16:56:25.572269   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 16:56:25.587981   12816 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 16:56:25.791977   12816 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 16:56:25.791977   12816 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 16:56:25.958638   12816 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 16:56:25.959035   12816 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 16:56:25.959403   12816 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 16:56:26.228464   12816 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 16:56:26.229544   12816 out.go:204]   - Generating certificates and keys ...
	I0416 16:56:26.229862   12816 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 16:56:26.230882   12816 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 16:56:26.359024   12816 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 16:56:26.583044   12816 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 16:56:26.715543   12816 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 16:56:27.014892   12816 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 16:56:27.414264   12816 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 16:56:27.414467   12816 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.642396   12816 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 16:56:27.642770   12816 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.844566   12816 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 16:56:28.089475   12816 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 16:56:28.543900   12816 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 16:56:28.548586   12816 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 16:56:29.051829   12816 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 16:56:29.485679   12816 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 16:56:29.830737   12816 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 16:56:30.055972   12816 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 16:56:30.310446   12816 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 16:56:30.311113   12816 out.go:204]   - Booting up control plane ...
	I0416 16:56:30.311289   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 16:56:30.311970   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 16:56:30.317049   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 16:56:30.342443   12816 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 16:56:30.526725   12816 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 16:56:37.142045   12816 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.615653 seconds
	I0416 16:56:37.159025   12816 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 16:56:37.175108   12816 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 16:56:37.707867   12816 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 16:56:37.708715   12816 kubeadm.go:309] [mark-control-plane] Marking the node ha-022600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 16:56:38.222729   12816 kubeadm.go:309] [bootstrap-token] Using token: a3r5qn.ikva200bfcppykg5
	I0416 16:56:38.223819   12816 out.go:204]   - Configuring RBAC rules ...
	I0416 16:56:38.224231   12816 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 16:56:38.232416   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 16:56:38.244982   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 16:56:38.249926   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 16:56:38.257723   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 16:56:38.262029   12816 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 16:56:38.279883   12816 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 16:56:38.592701   12816 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 16:56:38.638273   12816 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 16:56:38.639572   12816 kubeadm.go:309] 
	I0416 16:56:38.640154   12816 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 16:56:38.640230   12816 kubeadm.go:309] 
	I0416 16:56:38.640982   12816 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 16:56:38.641038   12816 kubeadm.go:309] 
	I0416 16:56:38.641299   12816 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 16:56:38.641581   12816 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309] 
	I0416 16:56:38.641989   12816 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 16:56:38.642031   12816 kubeadm.go:309] 
	I0416 16:56:38.642184   12816 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 16:56:38.642228   12816 kubeadm.go:309] 
	I0416 16:56:38.642350   12816 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 16:56:38.642660   12816 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 16:56:38.642862   12816 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 16:56:38.642900   12816 kubeadm.go:309] 
	I0416 16:56:38.643166   12816 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 16:56:38.643426   12816 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 16:56:38.643426   12816 kubeadm.go:309] 
	I0416 16:56:38.643613   12816 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.643867   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 16:56:38.643909   12816 kubeadm.go:309] 	--control-plane 
	I0416 16:56:38.643961   12816 kubeadm.go:309] 
	I0416 16:56:38.644233   12816 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 16:56:38.644272   12816 kubeadm.go:309] 
	I0416 16:56:38.644444   12816 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.644734   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 16:56:38.647455   12816 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 16:56:38.647488   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:38.647539   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:38.648246   12816 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 16:56:38.657141   12816 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 16:56:38.671263   12816 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 16:56:38.671263   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 16:56:38.722410   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 16:56:39.265655   12816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-022600 minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=ha-022600 minikube.k8s.io/primary=true
	I0416 16:56:39.290244   12816 ops.go:34] apiserver oom_adj: -16
	I0416 16:56:39.441163   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.950155   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.453751   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.955147   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.455931   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.953044   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.454696   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.949299   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.454962   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.953447   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.456402   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.956686   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.449476   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.951602   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.451988   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.949212   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.449356   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.950703   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.458777   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.956811   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.456669   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.943595   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.443906   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.950503   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.454863   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.944285   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:52.083562   12816 kubeadm.go:1107] duration metric: took 12.8170858s to wait for elevateKubeSystemPrivileges
	W0416 16:56:52.083816   12816 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 16:56:52.083816   12816 kubeadm.go:393] duration metric: took 26.808438s to StartCluster
	I0416 16:56:52.083816   12816 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.084214   12816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:52.086643   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.088384   12816 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:56:52.088384   12816 start.go:240] waiting for startup goroutines ...
	I0416 16:56:52.088384   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 16:56:52.088384   12816 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 16:56:52.088630   12816 addons.go:69] Setting storage-provisioner=true in profile "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:234] Setting addon storage-provisioner=true in "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:69] Setting default-storageclass=true in profile "ha-022600"
	I0416 16:56:52.088850   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:52.088964   12816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-022600"
	I0416 16:56:52.088964   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:56:52.090289   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.090671   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.207597   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 16:56:52.469504   12816 start.go:946] {"host.minikube.internal": 172.19.80.1} host record injected into CoreDNS's ConfigMap
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.165583   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.165635   12816 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:54.165635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.166734   12816 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:56:54.166340   12816 kapi.go:59] client config for ha-022600: &rest.Config{Host:"https://172.19.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 16:56:54.167133   12816 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:56:54.167133   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 16:56:54.167133   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:54.167791   12816 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 16:56:54.168180   12816 addons.go:234] Setting addon default-storageclass=true in "ha-022600"
	I0416 16:56:54.168347   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:54.169251   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:56.312581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.312988   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313046   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313270   12816 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 16:56:56.313270   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.330966   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:58.735727   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:58.735876   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.736103   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:58.898469   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:00.676245   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:57:00.828151   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 16:57:01.248041   12816 round_trippers.go:463] GET https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0416 16:57:01.248041   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.248041   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.248041   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.261890   12816 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0416 16:57:01.262478   12816 round_trippers.go:463] PUT https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0416 16:57:01.262478   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Content-Type: application/json
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.262478   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.268964   12816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:57:01.269995   12816 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 16:57:01.270495   12816 addons.go:505] duration metric: took 9.181591s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 16:57:01.270576   12816 start.go:245] waiting for cluster config update ...
	I0416 16:57:01.270618   12816 start.go:254] writing updated cluster config ...
	I0416 16:57:01.271859   12816 out.go:177] 
	I0416 16:57:01.284169   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:57:01.284169   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.285951   12816 out.go:177] * Starting "ha-022600-m02" control-plane node in "ha-022600" cluster
	I0416 16:57:01.286952   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:57:01.286952   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:57:01.286952   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:57:01.286952   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:57:01.286952   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.296247   12816 start.go:360] acquireMachinesLock for ha-022600-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:57:01.297324   12816 start.go:364] duration metric: took 1.0773ms to acquireMachinesLock for "ha-022600-m02"
	I0416 16:57:01.297559   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:57:01.297559   12816 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0416 16:57:01.297559   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:57:01.297559   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:57:01.297559   12816 client.go:168] LocalClient.Create starting
	I0416 16:57:01.298838   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299293   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:57:03.017072   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:57:03.017279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:03.017366   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:09.316740   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:57:09.669552   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: Creating VM...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:12.690107   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:57:12.690185   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:14.267157   12816 main.go:141] libmachine: Creating VHD
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : FE960248-03C1-43D6-B7AE-A60D4C86308B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:57:17.758158   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:57:20.709379   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:20.709950   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:20.710019   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -SizeBytes 20000MB
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-022600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600-m02 -DynamicMemoryEnabled $false
	I0416 16:57:28.159153   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:28.159229   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:28.159409   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600-m02 -Count 2
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\boot2docker.iso'
	I0416 16:57:32.420739   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:32.421735   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:32.421878   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd'
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: Starting VM...
	I0416 16:57:34.780971   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600-m02
	I0416 16:57:37.369505   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:57:37.369767   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:39.415286   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:42.700464   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:48.000886   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:49.992438   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:49.992894   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:49.992930   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:53.290891   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:55.287716   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:55.287962   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:55.288037   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:58.572803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:02.905391   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stderr =====>] : 
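	[editor's note] The alternating `( Hyper-V\Get-VM … ).state` and `ipaddresses[0]` queries above form a poll-until-ready loop: libmachine keeps asking until the adapter reports an address (here 172.19.80.125, after about 25 seconds). The same retry shape can be sketched in POSIX shell, with a stubbed query standing in for the PowerShell call; `poll_for_ip` and `fake_query` are illustrative names, not minikube's code:

	```shell
	#!/bin/sh
	# Generic poll-until-nonempty helper mirroring the loop in the log:
	# run a query command repeatedly until it prints a value or we give up.
	poll_for_ip() {
	  attempts=$1; shift
	  i=0
	  while [ "$i" -lt "$attempts" ]; do
	    ip=$("$@")              # e.g. the Get-VM ... ipaddresses[0] query
	    if [ -n "$ip" ]; then
	      echo "$ip"
	      return 0
	    fi
	    i=$((i + 1))
	    sleep 0                 # the real loop waits ~1s between queries
	  done
	  return 1
	}

	# Stub query: empty on the first two calls, then an address. A counter
	# file is used because each call runs in a subshell.
	tmp=$(mktemp)
	echo 0 > "$tmp"
	fake_query() {
	  n=$(($(cat "$tmp") + 1))
	  echo "$n" > "$tmp"
	  [ "$n" -ge 3 ] && echo "172.19.80.125"
	}

	poll_for_ip 5 fake_query
	```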
	I0416 16:58:04.899479   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:58:04.899479   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:06.914869   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:09.273511   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:09.273546   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:09.277783   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:09.278406   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:09.278406   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:58:09.413281   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:58:09.413281   12816 buildroot.go:166] provisioning hostname "ha-022600-m02"
	I0416 16:58:09.413281   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:11.439079   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:13.805295   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:13.805684   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:13.805684   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600-m02 && echo "ha-022600-m02" | sudo tee /etc/hostname
	I0416 16:58:13.957933   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600-m02
	
	I0416 16:58:13.958021   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:18.176996   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:18.178002   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:18.182057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:18.182681   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:18.182681   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:58:18.315751   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
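	[editor's note] The SSH command above is an idempotent hosts-file update: if the hostname is not yet present, either rewrite an existing `127.0.1.1` entry in place or append one. The same pattern can be exercised locally against a scratch file (sudo/tee dropped, paths hypothetical):

	```shell
	#!/bin/sh
	# Idempotently set the 127.0.1.1 hostname entry in a hosts-style file,
	# mirroring the conditional sed/append run over SSH in the log above.
	set_local_hostname() {
	  hosts_file=$1; name=$2
	  if ! grep -q "[[:space:]]$name\$" "$hosts_file"; then
	    if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts_file"; then
	      # an entry exists: rewrite it in place
	      sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts_file"
	    else
	      # no entry yet: append one
	      echo "127.0.1.1 $name" >> "$hosts_file"
	    fi
	  fi
	}

	hosts=$(mktemp)
	printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
	set_local_hostname "$hosts" ha-022600-m02
	set_local_hostname "$hosts" ha-022600-m02   # second call is a no-op
	```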
	I0416 16:58:18.315853   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:58:18.315853   12816 buildroot.go:174] setting up certificates
	I0416 16:58:18.315853   12816 provision.go:84] configureAuth start
	I0416 16:58:18.315853   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:20.243862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:22.525833   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:22.525945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:22.526057   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:24.418894   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:26.735560   12816 provision.go:143] copyHostCerts
	I0416 16:58:26.736546   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:58:26.736627   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:58:26.737290   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:58:26.737900   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:58:26.737900   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:58:26.738191   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:58:26.738908   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:58:26.738977   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:58:26.739652   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600-m02 san=[127.0.0.1 172.19.80.125 ha-022600-m02 localhost minikube]
	I0416 16:58:26.917277   12816 provision.go:177] copyRemoteCerts
	I0416 16:58:26.926308   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:58:26.926600   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:28.830343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:31.113681   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:31.229222   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3026703s)
	I0416 16:58:31.229222   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:58:31.229700   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:58:31.279666   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:58:31.280307   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:58:31.328101   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:58:31.328245   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:58:31.382563   12816 provision.go:87] duration metric: took 13.065969s to configureAuth
	I0416 16:58:31.382563   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:58:31.383343   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:58:31.383343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:33.331275   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:35.653673   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:35.653721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:35.656855   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:35.657430   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:35.657430   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:58:35.781565   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:58:35.781565   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:58:35.781565   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:58:35.782090   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:37.696344   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:39.961057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:39.961515   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:39.961564   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.81.207"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:58:40.123664   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.81.207
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:58:40.123818   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:42.064878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:42.064974   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:42.065152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:44.330103   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:44.330731   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:44.330731   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:58:46.283136   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
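	The `diff -u … || { mv …; systemctl … }` one-liner above is an install-only-if-changed idiom: the move and service restart fire only when the staged unit differs from (or, as here, when diff cannot stat) the installed one. A minimal sketch of the same pattern on throwaway files (file names are hypothetical, not the paths from this log):

```shell
set -eu
tmpdir=$(mktemp -d)
installed="$tmpdir/docker.service"
staged="$tmpdir/docker.service.new"

printf 'old\n' > "$installed"
printf 'new\n' > "$staged"

# diff exits non-zero when the files differ (or the target is missing),
# which triggers the move; when they already match, the move is skipped.
diff -u "$installed" "$staged" >/dev/null 2>&1 || mv "$staged" "$installed"
```

This keeps the update idempotent: re-running it against an unchanged staged file leaves the installed unit (and the running service) alone.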
	
	I0416 16:58:46.283253   12816 machine.go:97] duration metric: took 41.3814279s to provisionDockerMachine
	I0416 16:58:46.283253   12816 client.go:171] duration metric: took 1m44.9797412s to LocalClient.Create
	I0416 16:58:46.283253   12816 start.go:167] duration metric: took 1m44.9797412s to libmachine.API.Create "ha-022600"
	I0416 16:58:46.283253   12816 start.go:293] postStartSetup for "ha-022600-m02" (driver="hyperv")
	I0416 16:58:46.283345   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:58:46.292724   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:58:46.292724   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:50.480821   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:50.575284   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2823171s)
	I0416 16:58:50.584260   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:58:50.591292   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:58:50.591900   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:58:50.591900   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:58:50.601073   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:58:50.618807   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:58:50.671301   12816 start.go:296] duration metric: took 4.3877068s for postStartSetup
	I0416 16:58:50.673161   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:52.621684   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:54.923763   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:58:54.926483   12816 start.go:128] duration metric: took 1m53.622481s to createHost
	I0416 16:58:54.926657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:56.793184   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:59.024255   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:59.025184   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:59.029108   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:59.029633   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:59.029730   12816 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 16:58:59.149333   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286739.315259098
	
	I0416 16:58:59.149333   12816 fix.go:216] guest clock: 1713286739.315259098
	I0416 16:58:59.149333   12816 fix.go:229] Guest: 2024-04-16 16:58:59.315259098 +0000 UTC Remote: 2024-04-16 16:58:54.9265716 +0000 UTC m=+304.925199701 (delta=4.388687498s)
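	The `delta=4.388687498s` reported above is just the guest epoch time (from `date +%s.%N`) minus the host-side `Remote` timestamp. A sketch reproducing the arithmetic with the two values from this log (the host value is converted to an epoch number here as an assumption, matching 16:58:54.9265716 UTC):

```shell
guest=1713286739.315259098   # guest clock reading from the log above
host=1713286734.9265716      # host-side Remote timestamp as epoch seconds (assumed conversion)
# awk does the floating-point subtraction; rounded to milliseconds here.
skew=$(awk -v g="$guest" -v h="$host" 'BEGIN { printf "%.3f", g - h }')
echo "$skew"
```

A skew this size is why the provisioner follows up with `sudo date -s @…` to pin the guest clock, as the next log lines show.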
	I0416 16:58:59.149333   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:01.054656   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:03.307071   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:59:03.307459   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:59:03.307531   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286739
	I0416 16:59:03.449024   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:58:59 UTC 2024
	
	I0416 16:59:03.449024   12816 fix.go:236] clock set: Tue Apr 16 16:58:59 UTC 2024
	 (err=<nil>)
	I0416 16:59:03.449024   12816 start.go:83] releasing machines lock for "ha-022600-m02", held for 2m2.1447745s
	I0416 16:59:03.450039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:07.739042   12816 out.go:177] * Found network options:
	I0416 16:59:07.739784   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.740404   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.741027   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.741505   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 16:59:07.742708   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.744988   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:59:07.745153   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:07.752817   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 16:59:07.752817   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:09.758953   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:12.157582   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.158536   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.159044   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.185179   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.257436   12816 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5043642s)
	W0416 16:59:12.257436   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:59:12.266545   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:59:12.367206   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:59:12.367296   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:12.367330   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6219642s)
	I0416 16:59:12.367330   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:12.423201   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:59:12.453988   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:59:12.472992   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:59:12.482991   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:59:12.510864   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.538866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:59:12.565866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.597751   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:59:12.622761   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:59:12.648905   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:59:12.674904   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
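	The `sed` invocations above rewrite `/etc/containerd/config.toml` in place; note how the `SystemdCgroup` edit captures the leading whitespace so indentation survives the substitution. A quick check of that exact pattern on a temp file (GNU sed assumed, as on the Buildroot guest):

```shell
set -eu
tmp=$(mktemp)
printf '            SystemdCgroup = true\n' > "$tmp"
# Same expression as in the log: \1 re-emits the captured indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$tmp"
cat "$tmp"
```

The other edits in the sequence follow the same shape: anchor on the key, replace the whole line, keep the captured prefix.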
	I0416 16:59:12.713452   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:59:12.741495   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:59:12.768497   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:12.975524   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:59:13.011635   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:13.023647   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:59:13.058146   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.091991   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:59:13.139058   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.173081   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.208242   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:59:13.259511   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.282094   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:13.329081   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:59:13.344832   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:59:13.362131   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:59:13.403377   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:59:13.597444   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:59:13.768147   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:59:13.768278   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:59:13.808294   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:13.987216   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:00:15.104612   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1138396s)
	I0416 17:00:15.115049   12816 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 17:00:15.145752   12816 out.go:177] 
	W0416 17:00:15.146473   12816 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 16:58:45 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.076842920Z" level=info msg="Starting up"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.077687177Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.078706068Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.109665355Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138411128Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138448735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138508447Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138523049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138600164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138632670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138848110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138955930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139030244Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139045347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139142365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139433520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142495192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142588309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142778845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142795748Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143044695Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143174419Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143191422Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.152862930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153144583Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153313214Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153337519Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153354522Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153467543Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153957434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154159572Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154195179Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154212082Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154230586Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154258491Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154272393Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154287696Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154303599Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154317302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154330504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154344107Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154373612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154392516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154406618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154421121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154434024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154447526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154460128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154474031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154498536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154514539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154525841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154555046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154568249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154583952Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154604755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154629960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154642062Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154700973Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154916114Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155014532Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155030135Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155203567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155302486Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155325090Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155706861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155796078Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155907599Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155947306Z" level=info msg="containerd successfully booted in 0.047582s"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.119001526Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.129323458Z" level=info msg="Loading containers: start."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.358382320Z" level=info msg="Loading containers: done."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377033580Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377149301Z" level=info msg="Daemon has completed initialization"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.447556885Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 16:58:46 ha-022600-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.449134569Z" level=info msg="API listen on [::]:2376"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.178053148Z" level=info msg="Processing signal 'terminated'"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.179830517Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 16:59:14 ha-022600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.180814055Z" level=info msg="Daemon shutdown complete"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181020363Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181054564Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:59:15 ha-022600-m02 dockerd[1019]: time="2024-04-16T16:59:15.248212596Z" level=info msg="Starting up"
	Apr 16 17:00:15 ha-022600-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0416 17:00:15.146611   12816 out.go:239] * 
	W0416 17:00:15.147806   12816 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 17:00:15.148383   12816 out.go:177] 
	
	
	==> Docker <==
	Apr 16 17:13:47 ha-022600 dockerd[1325]: 2024/04/16 17:13:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:46 ha-022600 dockerd[1325]: 2024/04/16 17:17:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:46 ha-022600 dockerd[1325]: 2024/04/16 17:17:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d38b1a5f4caa8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago      Running             busybox                   0                   8a4de3aa24af1       busybox-7fdf7869d9-rpfpf
	3fe545bfad4e6       cbb01a7bd410d                                                                                         24 minutes ago      Running             coredns                   0                   093278b3840ef       coredns-76f75df574-qm89x
	979dee88be2b4       cbb01a7bd410d                                                                                         24 minutes ago      Running             coredns                   0                   4ad38b0d59335       coredns-76f75df574-ww2r6
	257879ecf06b2       6e38f40d628db                                                                                         24 minutes ago      Running             storage-provisioner       0                   bf991c3e34e2d       storage-provisioner
	be245de9ef545       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              24 minutes ago      Running             kindnet-cni               0                   92c35b3fd0967       kindnet-mwqvl
	05db92f49e0df       a1d263b5dc5b0                                                                                         24 minutes ago      Running             kube-proxy                0                   12380f49c1509       kube-proxy-2vddt
	d1ba82cd26254       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     24 minutes ago      Running             kube-vip                  0                   fa2c75c4c8d59       kube-vip-ha-022600
	a7fb69539df62       6052a25da3f97                                                                                         24 minutes ago      Running             kube-controller-manager   0                   b536621e20d4b       kube-controller-manager-ha-022600
	4fd5df8c9fd37       39f995c9f1996                                                                                         24 minutes ago      Running             kube-apiserver            0                   5a7a1e80caeb4       kube-apiserver-ha-022600
	e042d71e8b0e8       8c390d98f50c0                                                                                         24 minutes ago      Running             kube-scheduler            0                   5a2551c91a1b6       kube-scheduler-ha-022600
	c29b0762ff0bf       3861cfcd7c04c                                                                                         24 minutes ago      Running             etcd                      0                   c8a9aa3126cf5       etcd-ha-022600
	
	
	==> coredns [3fe545bfad4e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55981 - 4765 "HINFO IN 3735046377920793891.8143170502200932773. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058936595s
	[INFO] 10.244.0.4:43350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000388921s
	[INFO] 10.244.0.4:35317 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.052221997s
	[INFO] 10.244.0.4:52074 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.040069369s
	[INFO] 10.244.0.4:49068 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.053312593s
	[INFO] 10.244.0.4:54711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123507s
	[INFO] 10.244.0.4:44694 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037006811s
	[INFO] 10.244.0.4:33399 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124606s
	[INFO] 10.244.0.4:37329 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000241612s
	[INFO] 10.244.0.4:57333 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131407s
	[INFO] 10.244.0.4:38806 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060403s
	[INFO] 10.244.0.4:48948 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000263914s
	[INFO] 10.244.0.4:51825 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177309s
	[INFO] 10.244.0.4:53272 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00018301s
	
	
	==> coredns [979dee88be2b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50127 - 24072 "HINFO IN 7665836187497317301.2285362183679153792. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027543487s
	[INFO] 10.244.0.4:34822 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000224011s
	[INFO] 10.244.0.4:48911 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000349218s
	[INFO] 10.244.0.4:43369 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023699624s
	[INFO] 10.244.0.4:56309 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000258914s
	[INFO] 10.244.0.4:36791 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.003463479s
	[INFO] 10.244.0.4:55996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000301816s
	[INFO] 10.244.0.4:35967 - 5 "PTR IN 1.80.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000116506s
	
	
	==> describe nodes <==
	Name:               ha-022600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-022600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-022600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:56:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-022600
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:21:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:16:34 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:16:34 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:16:34 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:16:34 +0000   Tue, 16 Apr 2024 16:57:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.81.207
	  Hostname:    ha-022600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4674338fa494bbcb2e21e2b4385c5e1
	  System UUID:                201025fc-0c03-cc49-a194-29d6500971a2
	  Boot ID:                    6ae5bedd-6e8e-4f58-b08c-8e9912fd04de
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-rpfpf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-76f75df574-qm89x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	  kube-system                 coredns-76f75df574-ww2r6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	  kube-system                 etcd-ha-022600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kindnet-mwqvl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-ha-022600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-022600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-2vddt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-022600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-022600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24m   kube-proxy       
	  Normal  Starting                 24m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  24m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  24m   kubelet          Node ha-022600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m   kubelet          Node ha-022600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m   kubelet          Node ha-022600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24m   node-controller  Node ha-022600 event: Registered Node ha-022600 in Controller
	  Normal  NodeReady                24m   kubelet          Node ha-022600 status is now: NodeReady
	
	
	Name:               ha-022600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-022600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-022600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T17_16_38_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:16:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-022600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:21:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:17:08 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:17:08 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:17:08 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:17:08 +0000   Tue, 16 Apr 2024 17:16:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.93.94
	  Hostname:    ha-022600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 cefa169b716045589e59382d0939ad48
	  System UUID:                25782c5b-4e02-0547-b063-db6b9c5f1f5b
	  Boot ID:                    e7c67d41-aa2d-47a1-952b-fa7ff5422e05
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7c2px       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m32s
	  kube-system                 kube-proxy-ss5lp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m22s                  kube-proxy       
	  Normal  Starting                 4m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m32s (x2 over 4m32s)  kubelet          Node ha-022600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m32s (x2 over 4m32s)  kubelet          Node ha-022600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m32s (x2 over 4m32s)  kubelet          Node ha-022600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-022600-m03 event: Registered Node ha-022600-m03 in Controller
	  Normal  NodeReady                4m15s                  kubelet          Node ha-022600-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.656516] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr16 16:55] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.165290] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[Apr16 16:56] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.091843] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.493988] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.172637] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.230010] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +2.695048] systemd-fstab-generator[1185]: Ignoring "noauto" option for root device
	[  +0.219400] systemd-fstab-generator[1197]: Ignoring "noauto" option for root device
	[  +0.196554] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.267217] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[ +11.053282] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +0.095458] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.012264] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +6.758798] systemd-fstab-generator[1720]: Ignoring "noauto" option for root device
	[  +0.093227] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.850543] systemd-fstab-generator[2213]: Ignoring "noauto" option for root device
	[  +0.130310] kauditd_printk_skb: 72 callbacks suppressed
	[ +15.381320] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.386371] kauditd_printk_skb: 29 callbacks suppressed
	[Apr16 17:00] hrtimer: interrupt took 5042261 ns
	[  +0.908827] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [c29b0762ff0b] <==
	{"level":"info","ts":"2024-04-16T17:06:33.350784Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":964}
	{"level":"info","ts":"2024-04-16T17:06:33.393755Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":964,"took":"42.49244ms","hash":1730924367,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2433024,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-04-16T17:06:33.395361Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1730924367,"revision":964,"compact-revision":-1}
	{"level":"info","ts":"2024-04-16T17:11:33.360995Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1502}
	{"level":"info","ts":"2024-04-16T17:11:33.366072Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1502,"took":"4.116913ms","hash":127222243,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1818624,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-16T17:11:33.366162Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":127222243,"revision":1502,"compact-revision":964}
	{"level":"info","ts":"2024-04-16T17:15:11.421098Z","caller":"traceutil/trace.go:171","msg":"trace[1208553513] transaction","detail":"{read_only:false; response_revision:2431; number_of_response:1; }","duration":"155.410586ms","start":"2024-04-16T17:15:11.265667Z","end":"2024-04-16T17:15:11.421077Z","steps":["trace[1208553513] 'process raft request'  (duration: 155.135072ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:31.529032Z","caller":"traceutil/trace.go:171","msg":"trace[505251683] linearizableReadLoop","detail":"{readStateIndex:2832; appliedIndex:2831; }","duration":"107.445309ms","start":"2024-04-16T17:16:31.421572Z","end":"2024-04-16T17:16:31.529017Z","steps":["trace[505251683] 'read index received'  (duration: 107.319103ms)","trace[505251683] 'applied index is now lower than readState.Index'  (duration: 125.606µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:16:31.529184Z","caller":"traceutil/trace.go:171","msg":"trace[359290184] transaction","detail":"{read_only:false; response_revision:2575; number_of_response:1; }","duration":"197.441024ms","start":"2024-04-16T17:16:31.331735Z","end":"2024-04-16T17:16:31.529176Z","steps":["trace[359290184] 'process raft request'  (duration: 197.196912ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:31.529431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.83703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-04-16T17:16:31.52969Z","caller":"traceutil/trace.go:171","msg":"trace[1576069612] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2575; }","duration":"108.130545ms","start":"2024-04-16T17:16:31.421545Z","end":"2024-04-16T17:16:31.529676Z","steps":["trace[1576069612] 'agreement among raft nodes before linearized reading'  (duration: 107.801628ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.727834Z","caller":"traceutil/trace.go:171","msg":"trace[1449824028] transaction","detail":"{read_only:false; response_revision:2578; number_of_response:1; }","duration":"364.497189ms","start":"2024-04-16T17:16:33.363317Z","end":"2024-04-16T17:16:33.727815Z","steps":["trace[1449824028] 'process raft request'  (duration: 364.339681ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.729115Z","caller":"traceutil/trace.go:171","msg":"trace[948704194] linearizableReadLoop","detail":"{readStateIndex:2837; appliedIndex:2836; }","duration":"283.56914ms","start":"2024-04-16T17:16:33.445533Z","end":"2024-04-16T17:16:33.729102Z","steps":["trace[948704194] 'read index received'  (duration: 282.906606ms)","trace[948704194] 'applied index is now lower than readState.Index'  (duration: 662.034µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:16:33.72965Z","caller":"traceutil/trace.go:171","msg":"trace[1908879286] transaction","detail":"{read_only:false; response_revision:2579; number_of_response:1; }","duration":"291.495046ms","start":"2024-04-16T17:16:33.438143Z","end":"2024-04-16T17:16:33.729638Z","steps":["trace[1908879286] 'process raft request'  (duration: 290.677204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.729668Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:16:33.363297Z","time spent":"364.643596ms","remote":"127.0.0.1:49456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":25,"response count":0,"response size":38,"request content":"compare:<key:\"compact_rev_key\" version:3 > success:<request_put:<key:\"compact_rev_key\" value_size:4 >> failure:<request_range:<key:\"compact_rev_key\" > >"}
	{"level":"warn","ts":"2024-04-16T17:16:33.729962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.040139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-04-16T17:16:33.73064Z","caller":"traceutil/trace.go:171","msg":"trace[1591257630] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2579; }","duration":"186.677072ms","start":"2024-04-16T17:16:33.543885Z","end":"2024-04-16T17:16:33.730562Z","steps":["trace[1591257630] 'agreement among raft nodes before linearized reading'  (duration: 185.842129ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.730022Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.488987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:16:33.731097Z","caller":"traceutil/trace.go:171","msg":"trace[339406949] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2579; }","duration":"285.581443ms","start":"2024-04-16T17:16:33.445505Z","end":"2024-04-16T17:16:33.731087Z","steps":["trace[339406949] 'agreement among raft nodes before linearized reading'  (duration: 284.501387ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.730066Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.750168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-04-16T17:16:33.731323Z","caller":"traceutil/trace.go:171","msg":"trace[1323315847] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2579; }","duration":"143.028733ms","start":"2024-04-16T17:16:33.588284Z","end":"2024-04-16T17:16:33.731313Z","steps":["trace[1323315847] 'agreement among raft nodes before linearized reading'  (duration: 141.746268ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.740796Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2041}
	{"level":"info","ts":"2024-04-16T17:16:33.745817Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2041,"took":"4.568334ms","hash":1427640317,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1806336,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-16T17:16:33.746025Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1427640317,"revision":2041,"compact-revision":1502}
	{"level":"info","ts":"2024-04-16T17:16:40.98492Z","caller":"traceutil/trace.go:171","msg":"trace[2045382782] transaction","detail":"{read_only:false; response_revision:2627; number_of_response:1; }","duration":"150.576419ms","start":"2024-04-16T17:16:40.834317Z","end":"2024-04-16T17:16:40.984893Z","steps":["trace[2045382782] 'process raft request'  (duration: 150.385009ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:21:09 up 26 min,  0 users,  load average: 0.16, 0.25, 0.20
	Linux ha-022600 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [be245de9ef54] <==
	I0416 17:20:01.758198       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:20:11.769525       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:20:11.769631       1 main.go:227] handling current node
	I0416 17:20:11.769644       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:20:11.769651       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:20:21.780408       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:20:21.780509       1 main.go:227] handling current node
	I0416 17:20:21.780523       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:20:21.780532       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:20:31.786179       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:20:31.786296       1 main.go:227] handling current node
	I0416 17:20:31.786310       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:20:31.786320       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:20:41.798646       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:20:41.798832       1 main.go:227] handling current node
	I0416 17:20:41.799143       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:20:41.799281       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:20:51.811869       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:20:51.811908       1 main.go:227] handling current node
	I0416 17:20:51.811922       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:20:51.811930       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:21:01.820852       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:21:01.820986       1 main.go:227] handling current node
	I0416 17:21:01.821001       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:21:01.821009       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [4fd5df8c9fd3] <==
	I0416 16:56:35.510308       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 16:56:35.512679       1 aggregator.go:165] initial CRD sync complete...
	I0416 16:56:35.516211       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 16:56:35.516249       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 16:56:35.516256       1 cache.go:39] Caches are synced for autoregister controller
	I0416 16:56:35.517473       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 16:56:35.522352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 16:56:35.529558       1 controller.go:624] quota admission added evaluator for: namespaces
	I0416 16:56:35.542494       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 16:56:36.411016       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0416 16:56:36.418409       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0416 16:56:36.419376       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 16:56:37.172553       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 16:56:37.235069       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 16:56:37.370838       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0416 16:56:37.381797       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.81.207]
	I0416 16:56:37.383264       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 16:56:37.388718       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 16:56:37.435733       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 16:56:38.737496       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 16:56:38.764389       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0416 16:56:38.781093       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 16:56:51.466047       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0416 16:56:51.868826       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	http2: server: error reading preface from client 172.19.93.94:54156: read tcp 172.19.95.254:8443->172.19.93.94:54156: read: connection reset by peer
	
	
	==> kube-controller-manager [a7fb69539df6] <==
	I0416 16:57:04.995404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="188.309µs"
	I0416 16:57:05.057328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="96.005µs"
	I0416 16:57:05.964586       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0416 16:57:07.181900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="167.009µs"
	I0416 16:57:07.224163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="15.307781ms"
	I0416 16:57:07.224903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="88.905µs"
	I0416 16:57:07.277301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="19.898845ms"
	I0416 16:57:07.277810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="53.303µs"
	I0416 17:00:45.709324       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 3"
	I0416 17:00:45.728545       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-rpfpf"
	I0416 17:00:45.745464       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-mnl84"
	I0416 17:00:45.756444       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-gph6r"
	I0416 17:00:45.770175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.082711ms"
	I0416 17:00:45.784213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.744211ms"
	I0416 17:00:45.810992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="26.530372ms"
	I0416 17:00:45.811146       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="42.802µs"
	I0416 17:00:48.413892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="8.465463ms"
	I0416 17:00:48.413981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.302µs"
	I0416 17:16:37.436480       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-022600-m03\" does not exist"
	I0416 17:16:37.446130       1 range_allocator.go:380] "Set node PodCIDR" node="ha-022600-m03" podCIDRs=["10.244.1.0/24"]
	I0416 17:16:37.459239       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7c2px"
	I0416 17:16:37.461522       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ss5lp"
	I0416 17:16:41.186805       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-022600-m03"
	I0416 17:16:41.187824       1 event.go:376] "Event occurred" object="ha-022600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-022600-m03 event: Registered Node ha-022600-m03 in Controller"
	I0416 17:16:54.835196       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-022600-m03"
	
	
	==> kube-proxy [05db92f49e0d] <==
	I0416 16:56:54.468581       1 server_others.go:72] "Using iptables proxy"
	I0416 16:56:54.505964       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.81.207"]
	I0416 16:56:54.583838       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:56:54.584172       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:56:54.584273       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:56:54.590060       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:56:54.590806       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:56:54.591014       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:56:54.592331       1 config.go:188] "Starting service config controller"
	I0416 16:56:54.592517       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:56:54.592625       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:56:54.592689       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:56:54.594058       1 config.go:315] "Starting node config controller"
	I0416 16:56:54.594215       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 16:56:54.693900       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 16:56:54.693964       1 shared_informer.go:318] Caches are synced for service config
	I0416 16:56:54.694328       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e042d71e8b0e] <==
	W0416 16:56:36.501819       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 16:56:36.501922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 16:56:36.507709       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 16:56:36.507948       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 16:56:36.573671       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 16:56:36.573877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 16:56:36.602162       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 16:56:36.602205       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 16:56:36.621966       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.622272       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.648392       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 16:56:36.648623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 16:56:36.694872       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 16:56:36.694970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 16:56:36.804118       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.805424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.821863       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.822231       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.866017       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 16:56:36.866298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 16:56:36.904820       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 16:56:36.905097       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 16:56:36.917996       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.918036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0416 16:56:39.298679       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 17:16:38 ha-022600 kubelet[2220]: E0416 17:16:38.994203    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:16:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:16:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:16:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:16:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:17:38 ha-022600 kubelet[2220]: E0416 17:17:38.995310    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:17:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:17:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:17:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:17:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:18:38 ha-022600 kubelet[2220]: E0416 17:18:38.994865    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:18:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:18:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:18:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:18:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:19:38 ha-022600 kubelet[2220]: E0416 17:19:38.994994    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:19:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:19:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:19:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:19:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:20:38 ha-022600 kubelet[2220]: E0416 17:20:38.994897    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:20:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:20:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:20:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:20:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 17:21:02.639169    3060 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600: (10.9744363s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-022600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7fdf7869d9-gph6r
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-022600 describe pod busybox-7fdf7869d9-gph6r
helpers_test.go:282: (dbg) kubectl --context ha-022600 describe pod busybox-7fdf7869d9-gph6r:

                                                
                                                
-- stdout --
	Name:             busybox-7fdf7869d9-gph6r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h29q5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-h29q5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  5m11s (x4 over 20m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11s                  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (93.72s)
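The repeated FailedScheduling events in the pod description above come from pod anti-affinity: each node already runs a busybox replica, so the incoming replica has no eligible node and stays Pending. A minimal sketch of the kind of rule involved (hypothetical field values; this assumes the test deployment uses a hard `requiredDuringScheduling` anti-affinity on the `app=busybox` label, which matches the scheduler message but is not shown in the log):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: no two busybox pods may share a node, so a replica
          # stays Pending ("0/N nodes are available") until a node without
          # one joins the cluster.
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: busybox
              topologyKey: kubernetes.io/hostname
      containers:
        - name: busybox
          image: gcr.io/k8s-minikube/busybox:1.28
          command: ["sleep", "3600"]
```

With a rule like this, "No preemption victims found" is expected: preempting another busybox pod would not free a node, since the evicted pod would face the same constraint.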

                                                
                                    

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (40.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.5984358s)
ha_test.go:413: expected profile "ha-022600" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-022600\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-022600\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\
":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-022600\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"172.19.95.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.19.81.207\",\"Port\":8443,\"Ku
bernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"172.19.80.125\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"172.19.93.94\",\"Port\":0,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\"
:false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube5:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations
\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600: (10.8469989s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-022600 logs -n 25: (7.4961763s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:13 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:13 UTC |
	|         | busybox-7fdf7869d9-rpfpf             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-rpfpf -- sh       |           |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.80.1             |           |                   |                |                     |                     |
	| node    | add -p ha-022600 -v=7                | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:16 UTC |
	|         | --alsologtostderr                    |           |                   |                |                     |                     |
	| node    | ha-022600 node stop m02 -v=7         | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:19 UTC | 16 Apr 24 17:20 UTC |
	|         | --alsologtostderr                    |           |                   |                |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:53:50
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:53:50.116950   12816 out.go:291] Setting OutFile to fd 784 ...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.117952   12816 out.go:304] Setting ErrFile to fd 696...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.138920   12816 out.go:298] Setting JSON to false
	I0416 16:53:50.141501   12816 start.go:129] hostinfo: {"hostname":"minikube5","uptime":24059,"bootTime":1713262370,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 16:53:50.141501   12816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 16:53:50.143700   12816 out.go:177] * [ha-022600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 16:53:50.144387   12816 notify.go:220] Checking for updates...
	I0416 16:53:50.144982   12816 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:53:50.145881   12816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:53:50.146373   12816 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 16:53:50.146987   12816 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:53:50.147788   12816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:53:50.149250   12816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:53:54.959514   12816 out.go:177] * Using the hyperv driver based on user configuration
	I0416 16:53:54.959811   12816 start.go:297] selected driver: hyperv
	I0416 16:53:54.959811   12816 start.go:901] validating driver "hyperv" against <nil>
	I0416 16:53:54.959811   12816 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:53:55.002641   12816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:53:55.003374   12816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:53:55.003816   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:53:55.003816   12816 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0416 16:53:55.003816   12816 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0416 16:53:55.003816   12816 start.go:340] cluster config:
	{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:53:55.003816   12816 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:53:55.005700   12816 out.go:177] * Starting "ha-022600" primary control-plane node in "ha-022600" cluster
	I0416 16:53:55.006053   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:53:55.006397   12816 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 16:53:55.006397   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:53:55.006539   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:53:55.006809   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:53:55.007075   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:53:55.007821   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json: {Name:mkc2f9747189bfa0db5ea21e93e1afafc0e89eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:53:55.008149   12816 start.go:360] acquireMachinesLock for ha-022600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:53:55.009151   12816 start.go:364] duration metric: took 1.0024ms to acquireMachinesLock for "ha-022600"
	I0416 16:53:55.009151   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:53:55.009151   12816 start.go:125] createHost starting for "" (driver="hyperv")
	I0416 16:53:55.010175   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:53:55.010397   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:53:55.010397   12816 client.go:168] LocalClient.Create starting
	I0416 16:53:55.010740   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011200   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011541   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:56.853713   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:58.347399   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:59.667644   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:02.791736   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:54:03.131710   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: Creating VM...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:05.824937   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:54:05.825022   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:54:07.398351   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:54:07.398635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:07.398635   12816 main.go:141] libmachine: Creating VHD
	I0416 16:54:07.398733   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:54:10.982944   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E9EB5342-E929-43B6-8B97-D7BDD354CEE1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:54:10.983213   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:54:10.992883   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -SizeBytes 20000MB
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-022600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600 -DynamicMemoryEnabled $false
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:21.397696   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600 -Count 2
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:23.302296   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\boot2docker.iso'
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:25.541060   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd'
	I0416 16:54:27.919093   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: Starting VM...
	I0416 16:54:27.919462   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600
	I0416 16:54:30.480037   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:32.483346   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:34.785082   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:34.785271   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:35.799483   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:37.788898   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:40.058231   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:40.058742   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:41.064074   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:45.301253   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:45.301420   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:46.309647   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:50.614494   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:50.615195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:51.620909   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:53.639317   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:53.640351   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:53.640405   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:55.942630   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:54:55.943393   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:55.943471   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:57.837395   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:54:57.837474   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:59.762683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:59.763360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:59.763440   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:02.010689   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:02.023158   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:02.023158   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:55:02.152140   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:55:02.152244   12816 buildroot.go:166] provisioning hostname "ha-022600"
	I0416 16:55:02.152322   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:03.957618   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:06.309822   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:06.310484   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:06.310484   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600 && echo "ha-022600" | sudo tee /etc/hostname
	I0416 16:55:06.479074   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600
	
	I0416 16:55:06.479182   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:08.433073   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:10.796713   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:10.797321   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:10.797321   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:55:10.944702   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:55:10.944870   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:55:10.944983   12816 buildroot.go:174] setting up certificates
	I0416 16:55:10.944983   12816 provision.go:84] configureAuth start
	I0416 16:55:10.945092   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:12.933614   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:17.088334   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:19.325791   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:19.326294   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:19.326294   12816 provision.go:143] copyHostCerts
	I0416 16:55:19.326294   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:55:19.326294   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:55:19.326294   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:55:19.326900   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:55:19.328097   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:55:19.328097   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:55:19.329417   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:55:19.329417   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:55:19.329417   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:55:19.330063   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:55:19.330726   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600 san=[127.0.0.1 172.19.81.207 ha-022600 localhost minikube]
	I0416 16:55:19.539117   12816 provision.go:177] copyRemoteCerts
	I0416 16:55:19.547114   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:55:19.547114   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:23.727019   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:23.834423   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.287066s)
	I0416 16:55:23.834577   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:55:23.835008   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:55:23.874966   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:55:23.875470   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0416 16:55:23.923921   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:55:23.923921   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:55:23.965042   12816 provision.go:87] duration metric: took 13.0192422s to configureAuth
	I0416 16:55:23.965042   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:55:23.965741   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:55:23.965827   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:25.905339   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:25.905903   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:25.905986   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:28.170079   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:28.170419   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:28.173356   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:28.173937   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:28.173937   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:55:28.301727   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:55:28.301727   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:55:28.302425   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:55:28.302506   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:30.181889   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:32.398667   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:32.399299   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:32.399475   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:55:32.556658   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:55:32.556887   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:34.446928   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:34.446969   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:34.447053   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:36.709442   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:36.710242   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:36.714111   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:36.714437   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:36.714437   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:55:38.655929   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 16:55:38.655929   12816 machine.go:97] duration metric: took 40.8162201s to provisionDockerMachine
	I0416 16:55:38.656036   12816 client.go:171] duration metric: took 1m43.6397622s to LocalClient.Create
	I0416 16:55:38.656036   12816 start.go:167] duration metric: took 1m43.6397622s to libmachine.API.Create "ha-022600"
	I0416 16:55:38.656036   12816 start.go:293] postStartSetup for "ha-022600" (driver="hyperv")
	I0416 16:55:38.656036   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:55:38.665072   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:55:38.665072   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:40.515910   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:42.764754   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:42.765404   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:42.765404   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:42.879399   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2140881s)
	I0416 16:55:42.892410   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:55:42.899117   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:55:42.899117   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:55:42.899734   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:55:42.901086   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:55:42.901154   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:55:42.911237   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:55:42.927664   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:55:42.975440   12816 start.go:296] duration metric: took 4.3191592s for postStartSetup
	I0416 16:55:42.977201   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:44.831562   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:47.134349   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:47.134788   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:47.135000   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:55:47.137270   12816 start.go:128] duration metric: took 1m52.1217609s to createHost
	I0416 16:55:47.137270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:49.024657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:51.238446   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:51.238526   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:51.242455   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:51.243052   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:51.243052   12816 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 16:55:51.369469   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286551.540248133
	
	I0416 16:55:51.369575   12816 fix.go:216] guest clock: 1713286551.540248133
	I0416 16:55:51.369575   12816 fix.go:229] Guest: 2024-04-16 16:55:51.540248133 +0000 UTC Remote: 2024-04-16 16:55:47.1372703 +0000 UTC m=+117.146546101 (delta=4.402977833s)
	I0416 16:55:51.369790   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:53.407581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:55.667543   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:55.667688   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:55.667688   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286551
	I0416 16:55:55.810591   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:55:51 UTC 2024
	
	I0416 16:55:55.810700   12816 fix.go:236] clock set: Tue Apr 16 16:55:51 UTC 2024
	 (err=<nil>)
	I0416 16:55:55.810700   12816 start.go:83] releasing machines lock for "ha-022600", held for 2m0.7946995s
	I0416 16:55:55.810965   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:57.711672   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:59.985139   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:59.985210   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:59.988730   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:55:59.988803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:59.998550   12816 ssh_runner.go:195] Run: cat /version.json
	I0416 16:55:59.998550   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:01.995788   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.995959   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.996084   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:04.379274   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.379356   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.379701   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.391360   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.392161   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.392520   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.469159   12816 ssh_runner.go:235] Completed: cat /version.json: (4.4703555s)
	I0416 16:56:04.479363   12816 ssh_runner.go:195] Run: systemctl --version
	I0416 16:56:04.584079   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5950892s)
	I0416 16:56:04.593130   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:56:04.602217   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:56:04.610705   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:56:04.639084   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:56:04.639119   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:04.639119   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:04.684127   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:56:04.713899   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:56:04.734297   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:56:04.745020   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:56:04.776657   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.806087   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:56:04.854166   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.890388   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:56:04.918140   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:56:04.946595   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:56:04.975408   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:56:05.001633   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:56:05.028505   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:56:05.053299   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:05.230466   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:56:05.260161   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:05.269988   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:56:05.302694   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.335619   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:56:05.368663   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.402792   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.435612   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:56:05.483431   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.505797   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:05.548843   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:56:05.563980   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:56:05.582552   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:56:05.624048   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:56:05.804495   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:56:05.984936   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:56:05.985183   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:56:06.032244   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:06.217075   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:08.662995   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4457805s)
	I0416 16:56:08.670977   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 16:56:08.701542   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:08.730698   12816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 16:56:08.941813   12816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 16:56:09.145939   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.331138   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 16:56:09.370232   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:09.409657   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.615575   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 16:56:09.726879   12816 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 16:56:09.737760   12816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 16:56:09.746450   12816 start.go:562] Will wait 60s for crictl version
	I0416 16:56:09.755840   12816 ssh_runner.go:195] Run: which crictl
	I0416 16:56:09.771470   12816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:56:09.827603   12816 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 16:56:09.836477   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.874967   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.907967   12816 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 16:56:09.908249   12816 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: 172.19.80.1/20
	I0416 16:56:09.924842   12816 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 16:56:09.931830   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:56:09.968931   12816 kubeadm.go:877] updating cluster {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:56:09.968931   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:56:09.975955   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:09.996899   12816 docker.go:685] Got preloaded images: 
	I0416 16:56:09.996899   12816 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0416 16:56:10.008276   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:10.035609   12816 ssh_runner.go:195] Run: which lz4
	I0416 16:56:10.042582   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0416 16:56:10.050849   12816 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 16:56:10.058074   12816 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 16:56:10.058074   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0416 16:56:11.721910   12816 docker.go:649] duration metric: took 1.6789563s to copy over tarball
	I0416 16:56:11.731181   12816 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 16:56:20.333529   12816 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.60186s)
	I0416 16:56:20.333529   12816 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 16:56:20.400516   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:20.419486   12816 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0416 16:56:20.469018   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:20.655543   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:23.229259   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5734984s)
	I0416 16:56:23.240705   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:23.262332   12816 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0416 16:56:23.262383   12816 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:56:23.262383   12816 kubeadm.go:928] updating node { 172.19.81.207 8443 v1.29.3 docker true true} ...
	I0416 16:56:23.262383   12816 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-022600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.81.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:56:23.270008   12816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 16:56:23.307277   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:23.307277   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:23.307362   12816 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:56:23.307406   12816 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.81.207 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-022600 NodeName:ha-022600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.81.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.81.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:56:23.307691   12816 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.81.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-022600"
	  kubeletExtraArgs:
	    node-ip: 172.19.81.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.81.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 16:56:23.307749   12816 kube-vip.go:111] generating kube-vip config ...
	I0416 16:56:23.318492   12816 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:56:23.343950   12816 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:56:23.344258   12816 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0416 16:56:23.353585   12816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:56:23.370542   12816 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:56:23.379813   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0416 16:56:23.397865   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0416 16:56:23.432291   12816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:56:23.462868   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0416 16:56:23.492579   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0416 16:56:23.534977   12816 ssh_runner.go:195] Run: grep 172.19.95.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:56:23.542734   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:56:23.575719   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:23.754395   12816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:56:23.781462   12816 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600 for IP: 172.19.81.207
	I0416 16:56:23.781462   12816 certs.go:194] generating shared ca certs ...
	I0416 16:56:23.781462   12816 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 16:56:23.783651   12816 certs.go:256] generating profile certs ...
	I0416 16:56:23.784402   12816 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key
	I0416 16:56:23.784569   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt with IP's: []
	I0416 16:56:23.984047   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt ...
	I0416 16:56:23.984047   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt: {Name:mk3ebdcb7f076a09a399313f7ed3edf14403a6a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.985977   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key ...
	I0416 16:56:23.985977   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key: {Name:mk94343a485b04f4b25a0ccd3245e197e7ecbec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.986215   12816 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648
	I0416 16:56:23.987265   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.81.207 172.19.95.254]
	I0416 16:56:24.317716   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 ...
	I0416 16:56:24.317716   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648: {Name:mk30f7000427979a1bcf8d6fc3995d1f7ccc170c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.319797   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 ...
	I0416 16:56:24.319797   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648: {Name:mk95e9e3e0f96031ef005f6c36470c216303a111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.320163   12816 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt
	I0416 16:56:24.331288   12816 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key
	I0416 16:56:24.332214   12816 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key
	I0416 16:56:24.332214   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt with IP's: []
	I0416 16:56:24.406574   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt ...
	I0416 16:56:24.406574   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt: {Name:mk73158a02cd8861e90a3b76d50704b360c358ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.407013   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key ...
	I0416 16:56:24.407013   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key: {Name:mk6842e2af8fadaf278ec7592edd5bd96f07c8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.408078   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:56:24.408945   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:56:24.409732   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:56:24.417870   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:56:24.418145   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 16:56:24.418533   12816 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 16:56:24.418533   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 16:56:24.418811   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 16:56:24.418990   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 16:56:24.419161   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 16:56:24.419368   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 16:56:24.419647   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 16:56:24.419767   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:24.419867   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 16:56:24.420003   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:56:24.466985   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:56:24.509816   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:56:24.554817   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:56:24.603006   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 16:56:24.646596   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 16:56:24.694120   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:56:24.741669   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 16:56:24.785888   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 16:56:24.829403   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:56:24.891821   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 16:56:24.933883   12816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 16:56:24.975091   12816 ssh_runner.go:195] Run: openssl version
	I0416 16:56:24.994129   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 16:56:25.021821   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.028512   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.037989   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.054924   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:56:25.080011   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:56:25.106815   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.113980   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.126339   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.144599   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:56:25.170309   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 16:56:25.199080   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.206080   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.214031   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.237026   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
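The run above installs each CA the way OpenSSL expects: copy the PEM under /usr/share/ca-certificates, then symlink it into /etc/ssl/certs under its subject-hash name (the `openssl x509 -hash` + `ln -fs` pair). A minimal standalone sketch of the same mechanism, using a throwaway self-signed CA in a scratch directory (all names here are made up for illustration; nothing touches the real trust store):

```shell
#!/bin/sh
set -e
# Scratch directory standing in for /etc/ssl/certs.
dir=$(mktemp -d)
cd "$dir"

# Throwaway self-signed CA (stand-in for minikubeCA.pem).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=scratchCA" -keyout ca.key -out scratchCA.pem 2>/dev/null

# OpenSSL locates CAs by <subject-hash>.0 symlinks; this reproduces
# the `openssl x509 -hash -noout` + `ln -fs` pair from the log.
hash=$(openssl x509 -hash -noout -in scratchCA.pem)
ln -fs "$PWD/scratchCA.pem" "$hash.0"

# With -CApath pointed at the directory of hash links, verification works.
openssl verify -CApath "$dir" scratchCA.pem
```

The `test -L … || ln -fs …` guard in the log just makes the link idempotent across restarts.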
	I0416 16:56:25.266837   12816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:56:25.273408   12816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:56:25.273858   12816 kubeadm.go:391] StartCluster: {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:56:25.281991   12816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 16:56:25.314891   12816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 16:56:25.342248   12816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 16:56:25.368032   12816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 16:56:25.385737   12816 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 16:56:25.385737   12816 kubeadm.go:156] found existing configuration files:
	
	I0416 16:56:25.393851   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 16:56:25.410393   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 16:56:25.421874   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 16:56:25.453762   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 16:56:25.468769   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 16:56:25.477353   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 16:56:25.501898   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.515888   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 16:56:25.524885   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.548518   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 16:56:25.563660   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 16:56:25.572269   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 16:56:25.587981   12816 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 16:56:25.791977   12816 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 16:56:25.791977   12816 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 16:56:25.958638   12816 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 16:56:25.959035   12816 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 16:56:25.959403   12816 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 16:56:26.228464   12816 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 16:56:26.229544   12816 out.go:204]   - Generating certificates and keys ...
	I0416 16:56:26.229862   12816 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 16:56:26.230882   12816 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 16:56:26.359024   12816 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 16:56:26.583044   12816 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 16:56:26.715543   12816 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 16:56:27.014892   12816 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 16:56:27.414264   12816 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 16:56:27.414467   12816 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.642396   12816 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 16:56:27.642770   12816 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.844566   12816 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 16:56:28.089475   12816 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 16:56:28.543900   12816 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 16:56:28.548586   12816 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 16:56:29.051829   12816 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 16:56:29.485679   12816 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 16:56:29.830737   12816 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 16:56:30.055972   12816 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 16:56:30.310446   12816 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 16:56:30.311113   12816 out.go:204]   - Booting up control plane ...
	I0416 16:56:30.311289   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 16:56:30.311970   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 16:56:30.317049   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 16:56:30.342443   12816 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 16:56:30.526725   12816 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 16:56:37.142045   12816 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.615653 seconds
	I0416 16:56:37.159025   12816 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 16:56:37.175108   12816 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 16:56:37.707867   12816 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 16:56:37.708715   12816 kubeadm.go:309] [mark-control-plane] Marking the node ha-022600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 16:56:38.222729   12816 kubeadm.go:309] [bootstrap-token] Using token: a3r5qn.ikva200bfcppykg5
	I0416 16:56:38.223819   12816 out.go:204]   - Configuring RBAC rules ...
	I0416 16:56:38.224231   12816 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 16:56:38.232416   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 16:56:38.244982   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 16:56:38.249926   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 16:56:38.257723   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 16:56:38.262029   12816 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 16:56:38.279883   12816 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 16:56:38.592701   12816 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 16:56:38.638273   12816 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 16:56:38.639572   12816 kubeadm.go:309] 
	I0416 16:56:38.640154   12816 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 16:56:38.640230   12816 kubeadm.go:309] 
	I0416 16:56:38.640982   12816 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 16:56:38.641038   12816 kubeadm.go:309] 
	I0416 16:56:38.641299   12816 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 16:56:38.641581   12816 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309] 
	I0416 16:56:38.641989   12816 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 16:56:38.642031   12816 kubeadm.go:309] 
	I0416 16:56:38.642184   12816 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 16:56:38.642228   12816 kubeadm.go:309] 
	I0416 16:56:38.642350   12816 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 16:56:38.642660   12816 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 16:56:38.642862   12816 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 16:56:38.642900   12816 kubeadm.go:309] 
	I0416 16:56:38.643166   12816 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 16:56:38.643426   12816 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 16:56:38.643426   12816 kubeadm.go:309] 
	I0416 16:56:38.643613   12816 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.643867   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 16:56:38.643909   12816 kubeadm.go:309] 	--control-plane 
	I0416 16:56:38.643961   12816 kubeadm.go:309] 
	I0416 16:56:38.644233   12816 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 16:56:38.644272   12816 kubeadm.go:309] 
	I0416 16:56:38.644444   12816 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.644734   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 16:56:38.647455   12816 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
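The `--discovery-token-ca-cert-hash sha256:…` value in the join commands above is a digest of the cluster CA's public key, which can be recomputed from the CA certificate (kubeadm's docs describe this for /etc/kubernetes/pki/ca.crt). A hedged sketch against a throwaway CA, so it is self-contained; the filenames are stand-ins:

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
cd "$dir"

# Throwaway CA standing in for the cluster's /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=kubernetes" -keyout ca.key -out ca.crt 2>/dev/null

# The discovery hash is SHA-256 over the DER-encoded public key
# (SubjectPublicKeyInfo) extracted from the CA certificate.
hash=$(openssl x509 -pubkey -in ca.crt \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
```

Recomputing this on a joining node lets it verify it is talking to the intended control plane before trusting the bootstrap token.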
	I0416 16:56:38.647488   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:38.647539   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:38.648246   12816 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 16:56:38.657141   12816 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 16:56:38.671263   12816 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 16:56:38.671263   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 16:56:38.722410   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 16:56:39.265655   12816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-022600 minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=ha-022600 minikube.k8s.io/primary=true
	I0416 16:56:39.290244   12816 ops.go:34] apiserver oom_adj: -16
	I0416 16:56:39.441163   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.950155   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.453751   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.955147   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.455931   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.953044   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.454696   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.949299   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.454962   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.953447   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.456402   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.956686   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.449476   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.951602   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.451988   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.949212   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.449356   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.950703   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.458777   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.956811   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.456669   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.943595   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.443906   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.950503   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.454863   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.944285   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:52.083562   12816 kubeadm.go:1107] duration metric: took 12.8170858s to wait for elevateKubeSystemPrivileges
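The repeated `kubectl get sa default` runs above are minikube polling at roughly half-second intervals until the default service account exists (i.e. until kube-system privileges are elevated), which the duration metric then summarizes. The pattern reduces to a bounded retry loop; sketched here with a stub `check` function standing in for the kubectl call:

```shell
#!/bin/sh
# Bounded poll: retry a command until it succeeds or attempts run out.
# `check` is a stand-in for `kubectl get sa default`; it fails twice,
# then succeeds, mimicking the service account eventually appearing.
tries=0
check() {
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]
}

attempt=0
until check; do
  attempt=$((attempt + 1))
  if [ "$attempt" -ge 25 ]; then
    echo "timed out waiting for default service account" >&2
    exit 1
  fi
  sleep 1   # the log shows ~500 ms between real polls
done
echo "default service account ready after $tries checks"
```

The bound matters: without it, a cluster that never becomes healthy would hang the test instead of failing with a useful error.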
	W0416 16:56:52.083816   12816 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 16:56:52.083816   12816 kubeadm.go:393] duration metric: took 26.808438s to StartCluster
	I0416 16:56:52.083816   12816 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.084214   12816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:52.086643   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.088384   12816 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:56:52.088384   12816 start.go:240] waiting for startup goroutines ...
	I0416 16:56:52.088384   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 16:56:52.088384   12816 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 16:56:52.088630   12816 addons.go:69] Setting storage-provisioner=true in profile "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:234] Setting addon storage-provisioner=true in "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:69] Setting default-storageclass=true in profile "ha-022600"
	I0416 16:56:52.088850   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:52.088964   12816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-022600"
	I0416 16:56:52.088964   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:56:52.090289   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.090671   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.207597   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 16:56:52.469504   12816 start.go:946] {"host.minikube.internal": 172.19.80.1} host record injected into CoreDNS's ConfigMap
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.165583   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.165635   12816 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:54.165635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.166734   12816 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:56:54.166340   12816 kapi.go:59] client config for ha-022600: &rest.Config{Host:"https://172.19.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 16:56:54.167133   12816 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:56:54.167133   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 16:56:54.167133   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:54.167791   12816 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 16:56:54.168180   12816 addons.go:234] Setting addon default-storageclass=true in "ha-022600"
	I0416 16:56:54.168347   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:54.169251   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:56.312581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.312988   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313046   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313270   12816 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 16:56:56.313270   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.330966   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:58.735727   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:58.735876   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.736103   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:58.898469   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:00.676245   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:57:00.828151   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 16:57:01.248041   12816 round_trippers.go:463] GET https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0416 16:57:01.248041   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.248041   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.248041   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.261890   12816 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0416 16:57:01.262478   12816 round_trippers.go:463] PUT https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0416 16:57:01.262478   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Content-Type: application/json
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.262478   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.268964   12816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:57:01.269995   12816 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 16:57:01.270495   12816 addons.go:505] duration metric: took 9.181591s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 16:57:01.270576   12816 start.go:245] waiting for cluster config update ...
	I0416 16:57:01.270618   12816 start.go:254] writing updated cluster config ...
	I0416 16:57:01.271859   12816 out.go:177] 
	I0416 16:57:01.284169   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:57:01.284169   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.285951   12816 out.go:177] * Starting "ha-022600-m02" control-plane node in "ha-022600" cluster
	I0416 16:57:01.286952   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:57:01.286952   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:57:01.286952   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:57:01.286952   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:57:01.286952   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.296247   12816 start.go:360] acquireMachinesLock for ha-022600-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:57:01.297324   12816 start.go:364] duration metric: took 1.0773ms to acquireMachinesLock for "ha-022600-m02"
	I0416 16:57:01.297559   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:57:01.297559   12816 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0416 16:57:01.297559   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:57:01.297559   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:57:01.297559   12816 client.go:168] LocalClient.Create starting
	I0416 16:57:01.298838   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299293   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:57:03.017072   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:57:03.017279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:03.017366   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:09.316740   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:57:09.669552   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: Creating VM...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:12.690107   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:57:12.690185   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:14.267157   12816 main.go:141] libmachine: Creating VHD
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : FE960248-03C1-43D6-B7AE-A60D4C86308B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:57:17.758158   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:57:20.709379   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:20.709950   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:20.710019   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -SizeBytes 20000MB
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-022600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600-m02 -DynamicMemoryEnabled $false
	I0416 16:57:28.159153   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:28.159229   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:28.159409   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600-m02 -Count 2
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\boot2docker.iso'
	I0416 16:57:32.420739   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:32.421735   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:32.421878   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd'
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: Starting VM...
	I0416 16:57:34.780971   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600-m02
	I0416 16:57:37.369505   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:57:37.369767   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:39.415286   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:42.700464   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:48.000886   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:49.992438   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:49.992894   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:49.992930   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:53.290891   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:55.287716   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:55.287962   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:55.288037   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:58.572803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:02.905391   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:04.899479   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:58:04.899479   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:06.914869   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:09.273511   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:09.273546   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:09.277783   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:09.278406   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:09.278406   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:58:09.413281   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:58:09.413281   12816 buildroot.go:166] provisioning hostname "ha-022600-m02"
	I0416 16:58:09.413281   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:11.439079   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:13.805295   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:13.805684   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:13.805684   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600-m02 && echo "ha-022600-m02" | sudo tee /etc/hostname
	I0416 16:58:13.957933   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600-m02
	
	I0416 16:58:13.958021   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:18.176996   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:18.178002   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:18.182057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:18.182681   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:18.182681   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:58:18.315751   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:58:18.315853   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:58:18.315853   12816 buildroot.go:174] setting up certificates
	I0416 16:58:18.315853   12816 provision.go:84] configureAuth start
	I0416 16:58:18.315853   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:20.243862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:22.525833   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:22.525945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:22.526057   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:24.418894   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:26.735560   12816 provision.go:143] copyHostCerts
	I0416 16:58:26.736546   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:58:26.736627   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:58:26.737290   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:58:26.737900   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:58:26.737900   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:58:26.738191   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:58:26.738908   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:58:26.738977   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:58:26.739652   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600-m02 san=[127.0.0.1 172.19.80.125 ha-022600-m02 localhost minikube]
	I0416 16:58:26.917277   12816 provision.go:177] copyRemoteCerts
	I0416 16:58:26.926308   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:58:26.926600   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:28.830343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:31.113681   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:31.229222   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3026703s)
	I0416 16:58:31.229222   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:58:31.229700   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:58:31.279666   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:58:31.280307   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:58:31.328101   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:58:31.328245   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:58:31.382563   12816 provision.go:87] duration metric: took 13.065969s to configureAuth
	I0416 16:58:31.382563   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:58:31.383343   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:58:31.383343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:33.331275   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:35.653673   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:35.653721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:35.656855   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:35.657430   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:35.657430   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:58:35.781565   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:58:35.781565   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:58:35.781565   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:58:35.782090   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:37.696344   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:39.961057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:39.961515   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:39.961564   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.81.207"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:58:40.123664   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.81.207
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:58:40.123818   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:42.064878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:42.064974   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:42.065152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:44.330103   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:44.330731   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:44.330731   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:58:46.283136   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 16:58:46.283253   12816 machine.go:97] duration metric: took 41.3814279s to provisionDockerMachine
	I0416 16:58:46.283253   12816 client.go:171] duration metric: took 1m44.9797412s to LocalClient.Create
	I0416 16:58:46.283253   12816 start.go:167] duration metric: took 1m44.9797412s to libmachine.API.Create "ha-022600"
	I0416 16:58:46.283253   12816 start.go:293] postStartSetup for "ha-022600-m02" (driver="hyperv")
	I0416 16:58:46.283345   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:58:46.292724   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:58:46.292724   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:50.480821   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:50.575284   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2823171s)
	I0416 16:58:50.584260   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:58:50.591292   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:58:50.591900   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:58:50.591900   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:58:50.601073   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:58:50.618807   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:58:50.671301   12816 start.go:296] duration metric: took 4.3877068s for postStartSetup
	I0416 16:58:50.673161   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:52.621684   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:54.923763   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:58:54.926483   12816 start.go:128] duration metric: took 1m53.622481s to createHost
	I0416 16:58:54.926657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:56.793184   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:59.024255   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:59.025184   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:59.029108   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:59.029633   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:59.029730   12816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:58:59.149333   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286739.315259098
	
	I0416 16:58:59.149333   12816 fix.go:216] guest clock: 1713286739.315259098
	I0416 16:58:59.149333   12816 fix.go:229] Guest: 2024-04-16 16:58:59.315259098 +0000 UTC Remote: 2024-04-16 16:58:54.9265716 +0000 UTC m=+304.925199701 (delta=4.388687498s)
	I0416 16:58:59.149333   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:01.054656   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:03.307071   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:59:03.307459   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:59:03.307531   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286739
	I0416 16:59:03.449024   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:58:59 UTC 2024
	
	I0416 16:59:03.449024   12816 fix.go:236] clock set: Tue Apr 16 16:58:59 UTC 2024
	 (err=<nil>)
	I0416 16:59:03.449024   12816 start.go:83] releasing machines lock for "ha-022600-m02", held for 2m2.1447745s
	I0416 16:59:03.450039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:07.739042   12816 out.go:177] * Found network options:
	I0416 16:59:07.739784   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.740404   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.741027   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.741505   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 16:59:07.742708   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.744988   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:59:07.745153   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:07.752817   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 16:59:07.752817   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:09.758953   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:12.157582   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.158536   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.159044   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.185179   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.257436   12816 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5043642s)
	W0416 16:59:12.257436   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:59:12.266545   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:59:12.367206   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:59:12.367296   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:12.367330   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6219642s)
	I0416 16:59:12.367330   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:12.423201   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:59:12.453988   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:59:12.472992   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:59:12.482991   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:59:12.510864   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.538866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:59:12.565866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.597751   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:59:12.622761   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:59:12.648905   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:59:12.674904   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:59:12.713452   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:59:12.741495   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:59:12.768497   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:12.975524   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:59:13.011635   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:13.023647   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:59:13.058146   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.091991   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:59:13.139058   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.173081   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.208242   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:59:13.259511   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.282094   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:13.329081   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:59:13.344832   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:59:13.362131   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:59:13.403377   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:59:13.597444   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:59:13.768147   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:59:13.768278   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:59:13.808294   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:13.987216   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:00:15.104612   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1138396s)
	I0416 17:00:15.115049   12816 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 17:00:15.145752   12816 out.go:177] 
	W0416 17:00:15.146473   12816 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 16:58:45 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.076842920Z" level=info msg="Starting up"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.077687177Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.078706068Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.109665355Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138411128Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138448735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138508447Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138523049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138600164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138632670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138848110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138955930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139030244Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139045347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139142365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139433520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142495192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142588309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142778845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142795748Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143044695Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143174419Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143191422Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.152862930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153144583Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153313214Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153337519Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153354522Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153467543Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153957434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154159572Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154195179Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154212082Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154230586Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154258491Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154272393Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154287696Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154303599Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154317302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154330504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154344107Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154373612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154392516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154406618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154421121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154434024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154447526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154460128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154474031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154498536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154514539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154525841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154555046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154568249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154583952Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154604755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154629960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154642062Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154700973Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154916114Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155014532Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155030135Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155203567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155302486Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155325090Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155706861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155796078Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155907599Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155947306Z" level=info msg="containerd successfully booted in 0.047582s"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.119001526Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.129323458Z" level=info msg="Loading containers: start."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.358382320Z" level=info msg="Loading containers: done."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377033580Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377149301Z" level=info msg="Daemon has completed initialization"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.447556885Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 16:58:46 ha-022600-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.449134569Z" level=info msg="API listen on [::]:2376"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.178053148Z" level=info msg="Processing signal 'terminated'"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.179830517Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 16:59:14 ha-022600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.180814055Z" level=info msg="Daemon shutdown complete"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181020363Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181054564Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:59:15 ha-022600-m02 dockerd[1019]: time="2024-04-16T16:59:15.248212596Z" level=info msg="Starting up"
	Apr 16 17:00:15 ha-022600-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0416 17:00:15.146611   12816 out.go:239] * 
	W0416 17:00:15.147806   12816 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 17:00:15.148383   12816 out.go:177] 
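	The root cause of this start failure is the dockerd line above: the daemon exits because it cannot dial `/run/containerd/containerd.sock` before the dial deadline. When triaging many such report dumps, that signature can be flagged mechanically; a minimal grep sketch (the sample log line is copied from the output above, and the `containerd-dial-failure` tag is a hypothetical label, not minikube output):

```shell
#!/bin/sh
# Hypothetical triage helper: flag runs where dockerd failed to reach containerd.
# $log holds the exact error line emitted in the journal output above.
log='Apr 16 17:00:15 ha-022600-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded'

# grep -q exits 0 on a match; print a short tag so a report scraper can count hits.
if printf '%s\n' "$log" | grep -q 'failed to dial "/run/containerd/containerd.sock"'; then
  echo "containerd-dial-failure"
fi
```

	In practice the same pattern can be run over a saved `minikube logs --file=logs.txt` dump instead of a single variable.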
	
	
	==> Docker <==
	Apr 16 17:17:47 ha-022600 dockerd[1325]: 2024/04/16 17:17:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:09 ha-022600 dockerd[1325]: 2024/04/16 17:21:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:09 ha-022600 dockerd[1325]: 2024/04/16 17:21:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:09 ha-022600 dockerd[1325]: 2024/04/16 17:21:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:09 ha-022600 dockerd[1325]: 2024/04/16 17:21:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:09 ha-022600 dockerd[1325]: 2024/04/16 17:21:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:10 ha-022600 dockerd[1325]: 2024/04/16 17:21:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:10 ha-022600 dockerd[1325]: 2024/04/16 17:21:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:10 ha-022600 dockerd[1325]: 2024/04/16 17:21:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d38b1a5f4caa8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago      Running             busybox                   0                   8a4de3aa24af1       busybox-7fdf7869d9-rpfpf
	3fe545bfad4e6       cbb01a7bd410d                                                                                         24 minutes ago      Running             coredns                   0                   093278b3840ef       coredns-76f75df574-qm89x
	979dee88be2b4       cbb01a7bd410d                                                                                         24 minutes ago      Running             coredns                   0                   4ad38b0d59335       coredns-76f75df574-ww2r6
	257879ecf06b2       6e38f40d628db                                                                                         24 minutes ago      Running             storage-provisioner       0                   bf991c3e34e2d       storage-provisioner
	be245de9ef545       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              24 minutes ago      Running             kindnet-cni               0                   92c35b3fd0967       kindnet-mwqvl
	05db92f49e0df       a1d263b5dc5b0                                                                                         24 minutes ago      Running             kube-proxy                0                   12380f49c1509       kube-proxy-2vddt
	d1ba82cd26254       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     25 minutes ago      Running             kube-vip                  0                   fa2c75c4c8d59       kube-vip-ha-022600
	a7fb69539df62       6052a25da3f97                                                                                         25 minutes ago      Running             kube-controller-manager   0                   b536621e20d4b       kube-controller-manager-ha-022600
	4fd5df8c9fd37       39f995c9f1996                                                                                         25 minutes ago      Running             kube-apiserver            0                   5a7a1e80caeb4       kube-apiserver-ha-022600
	e042d71e8b0e8       8c390d98f50c0                                                                                         25 minutes ago      Running             kube-scheduler            0                   5a2551c91a1b6       kube-scheduler-ha-022600
	c29b0762ff0bf       3861cfcd7c04c                                                                                         25 minutes ago      Running             etcd                      0                   c8a9aa3126cf5       etcd-ha-022600
	
	
	==> coredns [3fe545bfad4e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55981 - 4765 "HINFO IN 3735046377920793891.8143170502200932773. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058936595s
	[INFO] 10.244.0.4:43350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000388921s
	[INFO] 10.244.0.4:35317 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.052221997s
	[INFO] 10.244.0.4:52074 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.040069369s
	[INFO] 10.244.0.4:49068 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.053312593s
	[INFO] 10.244.0.4:54711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123507s
	[INFO] 10.244.0.4:44694 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037006811s
	[INFO] 10.244.0.4:33399 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124606s
	[INFO] 10.244.0.4:37329 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000241612s
	[INFO] 10.244.0.4:57333 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131407s
	[INFO] 10.244.0.4:38806 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060403s
	[INFO] 10.244.0.4:48948 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000263914s
	[INFO] 10.244.0.4:51825 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177309s
	[INFO] 10.244.0.4:53272 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00018301s
	
	
	==> coredns [979dee88be2b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50127 - 24072 "HINFO IN 7665836187497317301.2285362183679153792. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027543487s
	[INFO] 10.244.0.4:34822 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000224011s
	[INFO] 10.244.0.4:48911 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000349218s
	[INFO] 10.244.0.4:43369 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023699624s
	[INFO] 10.244.0.4:56309 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000258914s
	[INFO] 10.244.0.4:36791 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.003463479s
	[INFO] 10.244.0.4:55996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000301816s
	[INFO] 10.244.0.4:35967 - 5 "PTR IN 1.80.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000116506s
	
	
	==> describe nodes <==
	Name:               ha-022600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-022600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-022600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:56:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-022600
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:21:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:21:40 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:21:40 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:21:40 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:21:40 +0000   Tue, 16 Apr 2024 16:57:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.81.207
	  Hostname:    ha-022600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4674338fa494bbcb2e21e2b4385c5e1
	  System UUID:                201025fc-0c03-cc49-a194-29d6500971a2
	  Boot ID:                    6ae5bedd-6e8e-4f58-b08c-8e9912fd04de
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-rpfpf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-76f75df574-qm89x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	  kube-system                 coredns-76f75df574-ww2r6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	  kube-system                 etcd-ha-022600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25m
	  kube-system                 kindnet-mwqvl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-ha-022600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-ha-022600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-2vddt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-022600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-vip-ha-022600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24m   kube-proxy       
	  Normal  Starting                 25m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  25m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25m   kubelet          Node ha-022600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m   kubelet          Node ha-022600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25m   kubelet          Node ha-022600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25m   node-controller  Node ha-022600 event: Registered Node ha-022600 in Controller
	  Normal  NodeReady                24m   kubelet          Node ha-022600 status is now: NodeReady
	
	
	Name:               ha-022600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-022600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-022600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T17_16_38_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:16:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-022600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:21:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:21:43 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:21:43 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:21:43 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:21:43 +0000   Tue, 16 Apr 2024 17:16:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.93.94
	  Hostname:    ha-022600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 cefa169b716045589e59382d0939ad48
	  System UUID:                25782c5b-4e02-0547-b063-db6b9c5f1f5b
	  Boot ID:                    e7c67d41-aa2d-47a1-952b-fa7ff5422e05
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-mnl84    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kindnet-7c2px               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m13s
	  kube-system                 kube-proxy-ss5lp            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m3s                   kube-proxy       
	  Normal  Starting                 5m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m13s (x2 over 5m13s)  kubelet          Node ha-022600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m13s (x2 over 5m13s)  kubelet          Node ha-022600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s (x2 over 5m13s)  kubelet          Node ha-022600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-022600-m03 event: Registered Node ha-022600-m03 in Controller
	  Normal  NodeReady                4m56s                  kubelet          Node ha-022600-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.656516] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr16 16:55] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.165290] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[Apr16 16:56] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.091843] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.493988] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.172637] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.230010] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +2.695048] systemd-fstab-generator[1185]: Ignoring "noauto" option for root device
	[  +0.219400] systemd-fstab-generator[1197]: Ignoring "noauto" option for root device
	[  +0.196554] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.267217] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[ +11.053282] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +0.095458] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.012264] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +6.758798] systemd-fstab-generator[1720]: Ignoring "noauto" option for root device
	[  +0.093227] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.850543] systemd-fstab-generator[2213]: Ignoring "noauto" option for root device
	[  +0.130310] kauditd_printk_skb: 72 callbacks suppressed
	[ +15.381320] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.386371] kauditd_printk_skb: 29 callbacks suppressed
	[Apr16 17:00] hrtimer: interrupt took 5042261 ns
	[  +0.908827] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [c29b0762ff0b] <==
	{"level":"info","ts":"2024-04-16T17:11:33.360995Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1502}
	{"level":"info","ts":"2024-04-16T17:11:33.366072Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1502,"took":"4.116913ms","hash":127222243,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1818624,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-16T17:11:33.366162Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":127222243,"revision":1502,"compact-revision":964}
	{"level":"info","ts":"2024-04-16T17:15:11.421098Z","caller":"traceutil/trace.go:171","msg":"trace[1208553513] transaction","detail":"{read_only:false; response_revision:2431; number_of_response:1; }","duration":"155.410586ms","start":"2024-04-16T17:15:11.265667Z","end":"2024-04-16T17:15:11.421077Z","steps":["trace[1208553513] 'process raft request'  (duration: 155.135072ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:31.529032Z","caller":"traceutil/trace.go:171","msg":"trace[505251683] linearizableReadLoop","detail":"{readStateIndex:2832; appliedIndex:2831; }","duration":"107.445309ms","start":"2024-04-16T17:16:31.421572Z","end":"2024-04-16T17:16:31.529017Z","steps":["trace[505251683] 'read index received'  (duration: 107.319103ms)","trace[505251683] 'applied index is now lower than readState.Index'  (duration: 125.606µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:16:31.529184Z","caller":"traceutil/trace.go:171","msg":"trace[359290184] transaction","detail":"{read_only:false; response_revision:2575; number_of_response:1; }","duration":"197.441024ms","start":"2024-04-16T17:16:31.331735Z","end":"2024-04-16T17:16:31.529176Z","steps":["trace[359290184] 'process raft request'  (duration: 197.196912ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:31.529431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.83703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-04-16T17:16:31.52969Z","caller":"traceutil/trace.go:171","msg":"trace[1576069612] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2575; }","duration":"108.130545ms","start":"2024-04-16T17:16:31.421545Z","end":"2024-04-16T17:16:31.529676Z","steps":["trace[1576069612] 'agreement among raft nodes before linearized reading'  (duration: 107.801628ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.727834Z","caller":"traceutil/trace.go:171","msg":"trace[1449824028] transaction","detail":"{read_only:false; response_revision:2578; number_of_response:1; }","duration":"364.497189ms","start":"2024-04-16T17:16:33.363317Z","end":"2024-04-16T17:16:33.727815Z","steps":["trace[1449824028] 'process raft request'  (duration: 364.339681ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.729115Z","caller":"traceutil/trace.go:171","msg":"trace[948704194] linearizableReadLoop","detail":"{readStateIndex:2837; appliedIndex:2836; }","duration":"283.56914ms","start":"2024-04-16T17:16:33.445533Z","end":"2024-04-16T17:16:33.729102Z","steps":["trace[948704194] 'read index received'  (duration: 282.906606ms)","trace[948704194] 'applied index is now lower than readState.Index'  (duration: 662.034µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:16:33.72965Z","caller":"traceutil/trace.go:171","msg":"trace[1908879286] transaction","detail":"{read_only:false; response_revision:2579; number_of_response:1; }","duration":"291.495046ms","start":"2024-04-16T17:16:33.438143Z","end":"2024-04-16T17:16:33.729638Z","steps":["trace[1908879286] 'process raft request'  (duration: 290.677204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.729668Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:16:33.363297Z","time spent":"364.643596ms","remote":"127.0.0.1:49456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":25,"response count":0,"response size":38,"request content":"compare:<key:\"compact_rev_key\" version:3 > success:<request_put:<key:\"compact_rev_key\" value_size:4 >> failure:<request_range:<key:\"compact_rev_key\" > >"}
	{"level":"warn","ts":"2024-04-16T17:16:33.729962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.040139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-04-16T17:16:33.73064Z","caller":"traceutil/trace.go:171","msg":"trace[1591257630] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2579; }","duration":"186.677072ms","start":"2024-04-16T17:16:33.543885Z","end":"2024-04-16T17:16:33.730562Z","steps":["trace[1591257630] 'agreement among raft nodes before linearized reading'  (duration: 185.842129ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.730022Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.488987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:16:33.731097Z","caller":"traceutil/trace.go:171","msg":"trace[339406949] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2579; }","duration":"285.581443ms","start":"2024-04-16T17:16:33.445505Z","end":"2024-04-16T17:16:33.731087Z","steps":["trace[339406949] 'agreement among raft nodes before linearized reading'  (duration: 284.501387ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.730066Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.750168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-04-16T17:16:33.731323Z","caller":"traceutil/trace.go:171","msg":"trace[1323315847] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2579; }","duration":"143.028733ms","start":"2024-04-16T17:16:33.588284Z","end":"2024-04-16T17:16:33.731313Z","steps":["trace[1323315847] 'agreement among raft nodes before linearized reading'  (duration: 141.746268ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.740796Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2041}
	{"level":"info","ts":"2024-04-16T17:16:33.745817Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2041,"took":"4.568334ms","hash":1427640317,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1806336,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-16T17:16:33.746025Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1427640317,"revision":2041,"compact-revision":1502}
	{"level":"info","ts":"2024-04-16T17:16:40.98492Z","caller":"traceutil/trace.go:171","msg":"trace[2045382782] transaction","detail":"{read_only:false; response_revision:2627; number_of_response:1; }","duration":"150.576419ms","start":"2024-04-16T17:16:40.834317Z","end":"2024-04-16T17:16:40.984893Z","steps":["trace[2045382782] 'process raft request'  (duration: 150.385009ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:21:33.757276Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2578}
	{"level":"info","ts":"2024-04-16T17:21:33.762061Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2578,"took":"4.259818ms","hash":879522910,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1994752,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-04-16T17:21:33.762168Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":879522910,"revision":2578,"compact-revision":2041}
	
	
	==> kernel <==
	 17:21:50 up 27 min,  0 users,  load average: 0.27, 0.26, 0.20
	Linux ha-022600 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [be245de9ef54] <==
	I0416 17:20:41.799281       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:20:51.811869       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:20:51.811908       1 main.go:227] handling current node
	I0416 17:20:51.811922       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:20:51.811930       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:21:01.820852       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:21:01.820986       1 main.go:227] handling current node
	I0416 17:21:01.821001       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:21:01.821009       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:21:11.830133       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:21:11.830230       1 main.go:227] handling current node
	I0416 17:21:11.830243       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:21:11.830251       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:21:21.836886       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:21:21.837276       1 main.go:227] handling current node
	I0416 17:21:21.837376       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:21:21.837456       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:21:31.848669       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:21:31.848826       1 main.go:227] handling current node
	I0416 17:21:31.849115       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:21:31.849189       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:21:41.854073       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:21:41.854170       1 main.go:227] handling current node
	I0416 17:21:41.854183       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:21:41.854191       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [4fd5df8c9fd3] <==
	I0416 16:56:35.510308       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 16:56:35.512679       1 aggregator.go:165] initial CRD sync complete...
	I0416 16:56:35.516211       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 16:56:35.516249       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 16:56:35.516256       1 cache.go:39] Caches are synced for autoregister controller
	I0416 16:56:35.517473       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 16:56:35.522352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 16:56:35.529558       1 controller.go:624] quota admission added evaluator for: namespaces
	I0416 16:56:35.542494       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 16:56:36.411016       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0416 16:56:36.418409       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0416 16:56:36.419376       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 16:56:37.172553       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 16:56:37.235069       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 16:56:37.370838       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0416 16:56:37.381797       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.81.207]
	I0416 16:56:37.383264       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 16:56:37.388718       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 16:56:37.435733       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 16:56:38.737496       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 16:56:38.764389       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0416 16:56:38.781093       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 16:56:51.466047       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0416 16:56:51.868826       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	http2: server: error reading preface from client 172.19.93.94:54156: read tcp 172.19.95.254:8443->172.19.93.94:54156: read: connection reset by peer
	
	
	==> kube-controller-manager [a7fb69539df6] <==
	I0416 16:57:07.224903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="88.905µs"
	I0416 16:57:07.277301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="19.898845ms"
	I0416 16:57:07.277810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="53.303µs"
	I0416 17:00:45.709324       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 3"
	I0416 17:00:45.728545       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-rpfpf"
	I0416 17:00:45.745464       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-mnl84"
	I0416 17:00:45.756444       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-gph6r"
	I0416 17:00:45.770175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.082711ms"
	I0416 17:00:45.784213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.744211ms"
	I0416 17:00:45.810992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="26.530372ms"
	I0416 17:00:45.811146       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="42.802µs"
	I0416 17:00:48.413892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="8.465463ms"
	I0416 17:00:48.413981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.302µs"
	I0416 17:16:37.436480       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-022600-m03\" does not exist"
	I0416 17:16:37.446130       1 range_allocator.go:380] "Set node PodCIDR" node="ha-022600-m03" podCIDRs=["10.244.1.0/24"]
	I0416 17:16:37.459239       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7c2px"
	I0416 17:16:37.461522       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ss5lp"
	I0416 17:16:41.186805       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-022600-m03"
	I0416 17:16:41.187824       1 event.go:376] "Event occurred" object="ha-022600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-022600-m03 event: Registered Node ha-022600-m03 in Controller"
	I0416 17:16:54.835196       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-022600-m03"
	I0416 17:21:10.057845       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="1.643684ms"
	I0416 17:21:10.062178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="53.103µs"
	I0416 17:21:10.084166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="65.503µs"
	I0416 17:21:12.764100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="8.960957ms"
	I0416 17:21:12.764437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="234.912µs"
	
	
	==> kube-proxy [05db92f49e0d] <==
	I0416 16:56:54.468581       1 server_others.go:72] "Using iptables proxy"
	I0416 16:56:54.505964       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.81.207"]
	I0416 16:56:54.583838       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:56:54.584172       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:56:54.584273       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:56:54.590060       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:56:54.590806       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:56:54.591014       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:56:54.592331       1 config.go:188] "Starting service config controller"
	I0416 16:56:54.592517       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:56:54.592625       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:56:54.592689       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:56:54.594058       1 config.go:315] "Starting node config controller"
	I0416 16:56:54.594215       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 16:56:54.693900       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 16:56:54.693964       1 shared_informer.go:318] Caches are synced for service config
	I0416 16:56:54.694328       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e042d71e8b0e] <==
	W0416 16:56:36.501819       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 16:56:36.501922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 16:56:36.507709       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 16:56:36.507948       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 16:56:36.573671       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 16:56:36.573877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 16:56:36.602162       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 16:56:36.602205       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 16:56:36.621966       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.622272       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.648392       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 16:56:36.648623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 16:56:36.694872       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 16:56:36.694970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 16:56:36.804118       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.805424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.821863       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.822231       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.866017       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 16:56:36.866298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 16:56:36.904820       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 16:56:36.905097       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 16:56:36.917996       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.918036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0416 16:56:39.298679       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 17:17:38 ha-022600 kubelet[2220]: E0416 17:17:38.995310    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:17:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:17:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:17:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:17:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:18:38 ha-022600 kubelet[2220]: E0416 17:18:38.994865    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:18:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:18:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:18:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:18:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:19:38 ha-022600 kubelet[2220]: E0416 17:19:38.994994    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:19:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:19:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:19:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:19:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:20:38 ha-022600 kubelet[2220]: E0416 17:20:38.994897    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:20:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:20:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:20:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:20:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:21:39 ha-022600 kubelet[2220]: E0416 17:21:39.001981    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:21:39 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:21:39 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:21:39 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:21:39 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 17:21:43.361365   14108 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600: (10.8389104s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-022600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7fdf7869d9-gph6r
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-022600 describe pod busybox-7fdf7869d9-gph6r
helpers_test.go:282: (dbg) kubectl --context ha-022600 describe pod busybox-7fdf7869d9-gph6r:

                                                
                                                
-- stdout --
	Name:             busybox-7fdf7869d9-gph6r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h29q5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-h29q5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  5m52s (x4 over 21m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  52s                  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (40.57s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (196.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-022600 node start m02 -v=7 --alsologtostderr: exit status 1 (1m47.6817927s)

                                                
                                                
-- stdout --
	* Starting "ha-022600-m02" control-plane node in "ha-022600" cluster
	* Restarting existing hyperv VM for "ha-022600-m02" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 17:22:02.497169   12952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 17:22:02.552887   12952 out.go:291] Setting OutFile to fd 604 ...
	I0416 17:22:02.571584   12952 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:22:02.571669   12952 out.go:304] Setting ErrFile to fd 960...
	I0416 17:22:02.571669   12952 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:22:02.585144   12952 mustload.go:65] Loading cluster: ha-022600
	I0416 17:22:02.586046   12952 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:22:02.586046   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:22:04.482964   12952 main.go:141] libmachine: [stdout =====>] : Off
	
	I0416 17:22:04.482964   12952 main.go:141] libmachine: [stderr =====>] : 
	W0416 17:22:04.482964   12952 host.go:58] "ha-022600-m02" host status: Stopped
	I0416 17:22:04.484265   12952 out.go:177] * Starting "ha-022600-m02" control-plane node in "ha-022600" cluster
	I0416 17:22:04.484988   12952 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 17:22:04.485176   12952 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 17:22:04.485176   12952 cache.go:56] Caching tarball of preloaded images
	I0416 17:22:04.485689   12952 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 17:22:04.485880   12952 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 17:22:04.485930   12952 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 17:22:04.487872   12952 start.go:360] acquireMachinesLock for ha-022600-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:22:04.488399   12952 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-022600-m02"
	I0416 17:22:04.488481   12952 start.go:96] Skipping create...Using existing machine configuration
	I0416 17:22:04.488481   12952 fix.go:54] fixHost starting: m02
	I0416 17:22:04.489083   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:22:06.477091   12952 main.go:141] libmachine: [stdout =====>] : Off
	
	I0416 17:22:06.478011   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:06.478011   12952 fix.go:112] recreateIfNeeded on ha-022600-m02: state=Stopped err=<nil>
	W0416 17:22:06.478011   12952 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 17:22:06.478751   12952 out.go:177] * Restarting existing hyperv VM for "ha-022600-m02" ...
	I0416 17:22:06.479382   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600-m02
	I0416 17:22:09.092077   12952 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:22:09.092077   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:09.092077   12952 main.go:141] libmachine: Waiting for host to start...
	I0416 17:22:09.092077   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:22:11.154535   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:22:11.154535   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:11.154705   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:22:13.479800   12952 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:22:13.479800   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:14.489994   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:22:16.435974   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:22:16.435974   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:16.435974   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:22:18.733476   12952 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:22:18.733550   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:19.747790   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:22:21.758072   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:22:21.758832   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:21.758832   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:22:23.989153   12952 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:22:23.989153   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:25.004951   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:22:26.964462   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:22:26.964984   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:26.965086   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:22:29.206226   12952 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:22:29.206296   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:30.207350   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:22:32.189656   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:22:32.189656   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:32.190691   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:22:34.510965   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:22:34.510965   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:34.513223   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:22:36.395019   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:22:36.395019   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:36.396066   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:22:38.647912   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:22:38.647912   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:38.647912   12952 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 17:22:38.649804   12952 machine.go:94] provisionDockerMachine start ...
	I0416 17:22:38.649890   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:22:40.529171   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:22:40.529171   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:40.529243   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:22:42.806051   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:22:42.806051   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:42.809900   12952 main.go:141] libmachine: Using SSH client type: native
	I0416 17:22:42.809970   12952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.225 22 <nil> <nil>}
	I0416 17:22:42.809970   12952 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:22:42.939760   12952 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 17:22:42.939760   12952 buildroot.go:166] provisioning hostname "ha-022600-m02"
	I0416 17:22:42.939760   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:22:44.806278   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:22:44.806341   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:44.806341   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:22:47.112671   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:22:47.112671   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:47.116770   12952 main.go:141] libmachine: Using SSH client type: native
	I0416 17:22:47.116770   12952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.225 22 <nil> <nil>}
	I0416 17:22:47.116770   12952 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600-m02 && echo "ha-022600-m02" | sudo tee /etc/hostname
	I0416 17:22:47.280612   12952 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600-m02
	
	I0416 17:22:47.280690   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:22:49.237516   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:22:49.237516   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:49.237605   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:22:51.530099   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:22:51.531022   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:51.534532   12952 main.go:141] libmachine: Using SSH client type: native
	I0416 17:22:51.535070   12952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.225 22 <nil> <nil>}
	I0416 17:22:51.535176   12952 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:22:51.688257   12952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:22:51.688257   12952 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 17:22:51.688257   12952 buildroot.go:174] setting up certificates
	I0416 17:22:51.688257   12952 provision.go:84] configureAuth start
	I0416 17:22:51.688257   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:22:53.665813   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:22:53.666164   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:53.666164   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:22:55.971227   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:22:55.971227   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:55.971317   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:22:57.881681   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:22:57.881681   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:22:57.882441   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:23:00.198205   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:23:00.198233   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:00.198233   12952 provision.go:143] copyHostCerts
	I0416 17:23:00.198441   12952 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 17:23:00.198650   12952 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 17:23:00.198650   12952 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 17:23:00.199007   12952 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 17:23:00.199933   12952 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 17:23:00.200146   12952 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 17:23:00.200146   12952 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 17:23:00.200146   12952 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 17:23:00.201128   12952 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 17:23:00.201296   12952 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 17:23:00.201365   12952 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 17:23:00.201588   12952 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 17:23:00.202400   12952 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600-m02 san=[127.0.0.1 172.19.91.225 ha-022600-m02 localhost minikube]
	I0416 17:23:00.337954   12952 provision.go:177] copyRemoteCerts
	I0416 17:23:00.347364   12952 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:23:00.347364   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:23:02.228680   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:23:02.228680   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:02.229056   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:23:04.482911   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:23:04.484012   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:04.484399   12952 sshutil.go:53] new ssh client: &{IP:172.19.91.225 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 17:23:04.599886   12952 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.252281s)
	I0416 17:23:04.599886   12952 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 17:23:04.599886   12952 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 17:23:04.647886   12952 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 17:23:04.649570   12952 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 17:23:04.701062   12952 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 17:23:04.701576   12952 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 17:23:04.747967   12952 provision.go:87] duration metric: took 13.0589703s to configureAuth
	I0416 17:23:04.747967   12952 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:23:04.748574   12952 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:23:04.748574   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:23:06.729225   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:23:06.729566   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:06.729566   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:23:09.066597   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:23:09.067003   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:09.071409   12952 main.go:141] libmachine: Using SSH client type: native
	I0416 17:23:09.071938   12952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.225 22 <nil> <nil>}
	I0416 17:23:09.071938   12952 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 17:23:09.215715   12952 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 17:23:09.215715   12952 buildroot.go:70] root file system type: tmpfs
	I0416 17:23:09.216244   12952 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 17:23:09.216244   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:23:11.207418   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:23:11.207418   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:11.207489   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:23:13.562836   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:23:13.562836   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:13.567717   12952 main.go:141] libmachine: Using SSH client type: native
	I0416 17:23:13.568140   12952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.225 22 <nil> <nil>}
	I0416 17:23:13.568140   12952 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 17:23:13.721335   12952 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 17:23:13.721434   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:23:15.639482   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:23:15.639482   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:15.639589   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:23:17.935283   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:23:17.935283   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:17.939268   12952 main.go:141] libmachine: Using SSH client type: native
	I0416 17:23:17.939268   12952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.225 22 <nil> <nil>}
	I0416 17:23:17.939786   12952 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 17:23:19.989156   12952 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 17:23:19.989156   12952 machine.go:97] duration metric: took 41.3370085s to provisionDockerMachine
	I0416 17:23:19.989156   12952 start.go:293] postStartSetup for "ha-022600-m02" (driver="hyperv")
	I0416 17:23:19.989156   12952 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:23:19.997733   12952 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:23:19.997733   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:23:21.928360   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:23:21.928360   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:21.928360   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:23:24.194079   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:23:24.194079   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:24.194354   12952 sshutil.go:53] new ssh client: &{IP:172.19.91.225 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 17:23:24.312621   12952 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3146434s)
	I0416 17:23:24.320868   12952 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:23:24.327299   12952 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:23:24.327383   12952 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 17:23:24.327777   12952 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 17:23:24.328332   12952 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 17:23:24.328332   12952 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 17:23:24.336760   12952 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:23:24.354914   12952 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 17:23:24.397854   12952 start.go:296] duration metric: took 4.408448s for postStartSetup
	I0416 17:23:24.405659   12952 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0416 17:23:24.405659   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:23:26.330995   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:23:26.332002   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:26.332090   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:23:28.606847   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:23:28.606847   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:28.607193   12952 sshutil.go:53] new ssh client: &{IP:172.19.91.225 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 17:23:28.714198   12952 ssh_runner.go:235] Completed: sudo ls --almost-all -1 /var/lib/minikube/backup: (4.3082949s)
	I0416 17:23:28.714307   12952 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0416 17:23:28.721895   12952 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0416 17:23:28.788082   12952 fix.go:56] duration metric: took 1m24.2948214s for fixHost
	I0416 17:23:28.788082   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:23:30.706837   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:23:30.706877   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:30.707047   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:23:32.989083   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:23:32.989083   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:32.993311   12952 main.go:141] libmachine: Using SSH client type: native
	I0416 17:23:32.993390   12952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.225 22 <nil> <nil>}
	I0416 17:23:32.993390   12952 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 17:23:33.131848   12952 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713288213.295549337
	
	I0416 17:23:33.131954   12952 fix.go:216] guest clock: 1713288213.295549337
	I0416 17:23:33.131954   12952 fix.go:229] Guest: 2024-04-16 17:23:33.295549337 +0000 UTC Remote: 2024-04-16 17:23:28.7880828 +0000 UTC m=+86.374524701 (delta=4.507466537s)
	I0416 17:23:33.131954   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:23:35.091294   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:23:35.091294   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:35.091373   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:23:37.366077   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:23:37.366156   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:37.370095   12952 main.go:141] libmachine: Using SSH client type: native
	I0416 17:23:37.370492   12952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.225 22 <nil> <nil>}
	I0416 17:23:37.370492   12952 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713288213
	I0416 17:23:37.525429   12952 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 17:23:33 UTC 2024
	
	I0416 17:23:37.525429   12952 fix.go:236] clock set: Tue Apr 16 17:23:33 UTC 2024
	 (err=<nil>)
	I0416 17:23:37.525429   12952 start.go:83] releasing machines lock for "ha-022600-m02", held for 1m33.0317548s
	I0416 17:23:37.525812   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:23:39.476349   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:23:39.476430   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:39.476430   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:23:41.740500   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:23:41.740500   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:41.743574   12952 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:23:41.743720   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:23:41.751403   12952 ssh_runner.go:195] Run: systemctl --version
	I0416 17:23:41.751403   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 17:23:43.698069   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:23:43.698069   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:43.698069   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:23:43.700426   12952 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:23:43.700426   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:43.700426   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:23:46.071521   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:23:46.071549   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:46.071877   12952 sshutil.go:53] new ssh client: &{IP:172.19.91.225 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 17:23:46.096240   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
	
	I0416 17:23:46.096695   12952 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:23:46.097054   12952 sshutil.go:53] new ssh client: &{IP:172.19.91.225 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 17:23:46.166475   12952 ssh_runner.go:235] Completed: systemctl --version: (4.4147312s)
	I0416 17:23:46.175207   12952 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 17:23:46.298579   12952 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5546653s)
	W0416 17:23:46.298579   12952 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:23:46.308210   12952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:23:46.343022   12952 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 17:23:46.343022   12952 start.go:494] detecting cgroup driver to use...
	I0416 17:23:46.343022   12952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:23:46.400195   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 17:23:46.429885   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 17:23:46.448520   12952 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 17:23:46.459047   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 17:23:46.487882   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 17:23:46.517461   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 17:23:46.547639   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 17:23:46.578688   12952 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:23:46.610246   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 17:23:46.639658   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 17:23:46.668682   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 17:23:46.699014   12952 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:23:46.729060   12952 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:23:46.760774   12952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:23:46.958333   12952 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 17:23:47.002528   12952 start.go:494] detecting cgroup driver to use...
	I0416 17:23:47.013054   12952 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 17:23:47.044104   12952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:23:47.075120   12952 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:23:47.109181   12952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:23:47.141278   12952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 17:23:47.170582   12952 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 17:23:47.221764   12952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 17:23:47.244028   12952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:23:47.284737   12952 ssh_runner.go:195] Run: which cri-dockerd
	I0416 17:23:47.298174   12952 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 17:23:47.315803   12952 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 17:23:47.355299   12952 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 17:23:47.537501   12952 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 17:23:47.703732   12952 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 17:23:47.703950   12952 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 17:23:47.746808   12952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:23:47.923132   12952 ssh_runner.go:195] Run: sudo systemctl restart docker

** /stderr **
ha_test.go:422: W0416 17:22:02.497169   12952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0416 17:22:02.552887   12952 out.go:291] Setting OutFile to fd 604 ...
I0416 17:22:02.571584   12952 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 17:22:02.571669   12952 out.go:304] Setting ErrFile to fd 960...
I0416 17:22:02.571669   12952 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 17:22:02.585144   12952 mustload.go:65] Loading cluster: ha-022600
I0416 17:22:02.586046   12952 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0416 17:22:02.586046   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:22:04.482964   12952 main.go:141] libmachine: [stdout =====>] : Off

I0416 17:22:04.482964   12952 main.go:141] libmachine: [stderr =====>] : 
W0416 17:22:04.482964   12952 host.go:58] "ha-022600-m02" host status: Stopped
I0416 17:22:04.484265   12952 out.go:177] * Starting "ha-022600-m02" control-plane node in "ha-022600" cluster
I0416 17:22:04.484988   12952 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0416 17:22:04.485176   12952 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
I0416 17:22:04.485176   12952 cache.go:56] Caching tarball of preloaded images
I0416 17:22:04.485689   12952 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0416 17:22:04.485880   12952 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0416 17:22:04.485930   12952 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
I0416 17:22:04.487872   12952 start.go:360] acquireMachinesLock for ha-022600-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0416 17:22:04.488399   12952 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-022600-m02"
I0416 17:22:04.488481   12952 start.go:96] Skipping create...Using existing machine configuration
I0416 17:22:04.488481   12952 fix.go:54] fixHost starting: m02
I0416 17:22:04.489083   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:22:06.477091   12952 main.go:141] libmachine: [stdout =====>] : Off

I0416 17:22:06.478011   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:06.478011   12952 fix.go:112] recreateIfNeeded on ha-022600-m02: state=Stopped err=<nil>
W0416 17:22:06.478011   12952 fix.go:138] unexpected machine state, will restart: <nil>
I0416 17:22:06.478751   12952 out.go:177] * Restarting existing hyperv VM for "ha-022600-m02" ...
I0416 17:22:06.479382   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600-m02
I0416 17:22:09.092077   12952 main.go:141] libmachine: [stdout =====>] : 
I0416 17:22:09.092077   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:09.092077   12952 main.go:141] libmachine: Waiting for host to start...
I0416 17:22:09.092077   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:22:11.154535   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:22:11.154535   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:11.154705   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:22:13.479800   12952 main.go:141] libmachine: [stdout =====>] : 
I0416 17:22:13.479800   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:14.489994   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:22:16.435974   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:22:16.435974   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:16.435974   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:22:18.733476   12952 main.go:141] libmachine: [stdout =====>] : 
I0416 17:22:18.733550   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:19.747790   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:22:21.758072   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:22:21.758832   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:21.758832   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:22:23.989153   12952 main.go:141] libmachine: [stdout =====>] : 
I0416 17:22:23.989153   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:25.004951   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:22:26.964462   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:22:26.964984   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:26.965086   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:22:29.206226   12952 main.go:141] libmachine: [stdout =====>] : 
I0416 17:22:29.206296   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:30.207350   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:22:32.189656   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:22:32.189656   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:32.190691   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:22:34.510965   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225

I0416 17:22:34.510965   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:34.513223   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:22:36.395019   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:22:36.395019   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:36.396066   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:22:38.647912   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225

I0416 17:22:38.647912   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:38.647912   12952 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
I0416 17:22:38.649804   12952 machine.go:94] provisionDockerMachine start ...
I0416 17:22:38.649890   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:22:40.529171   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:22:40.529171   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:40.529243   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:22:42.806051   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225

I0416 17:22:42.806051   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:42.809900   12952 main.go:141] libmachine: Using SSH client type: native
I0416 17:22:42.809970   12952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.225 22 <nil> <nil>}
I0416 17:22:42.809970   12952 main.go:141] libmachine: About to run SSH command:
hostname
I0416 17:22:42.939760   12952 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0416 17:22:42.939760   12952 buildroot.go:166] provisioning hostname "ha-022600-m02"
I0416 17:22:42.939760   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:22:44.806278   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:22:44.806341   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:44.806341   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:22:47.112671   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225

I0416 17:22:47.112671   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:47.116770   12952 main.go:141] libmachine: Using SSH client type: native
I0416 17:22:47.116770   12952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.225 22 <nil> <nil>}
I0416 17:22:47.116770   12952 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-022600-m02 && echo "ha-022600-m02" | sudo tee /etc/hostname
I0416 17:22:47.280612   12952 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600-m02

I0416 17:22:47.280690   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:22:49.237516   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:22:49.237516   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:49.237605   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:22:51.530099   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225

I0416 17:22:51.531022   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:51.534532   12952 main.go:141] libmachine: Using SSH client type: native
I0416 17:22:51.535070   12952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.225 22 <nil> <nil>}
I0416 17:22:51.535176   12952 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-022600-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-022600-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I0416 17:22:51.688257   12952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0416 17:22:51.688257   12952 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
I0416 17:22:51.688257   12952 buildroot.go:174] setting up certificates
I0416 17:22:51.688257   12952 provision.go:84] configureAuth start
I0416 17:22:51.688257   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:22:53.665813   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:22:53.666164   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:53.666164   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:22:55.971227   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225

I0416 17:22:55.971227   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:55.971317   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:22:57.881681   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:22:57.881681   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:22:57.882441   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:23:00.198205   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225

I0416 17:23:00.198233   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:00.198233   12952 provision.go:143] copyHostCerts
I0416 17:23:00.198441   12952 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
I0416 17:23:00.198650   12952 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
I0416 17:23:00.198650   12952 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
I0416 17:23:00.199007   12952 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
I0416 17:23:00.199933   12952 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
I0416 17:23:00.200146   12952 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
I0416 17:23:00.200146   12952 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
I0416 17:23:00.200146   12952 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
I0416 17:23:00.201128   12952 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
I0416 17:23:00.201296   12952 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
I0416 17:23:00.201365   12952 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
I0416 17:23:00.201588   12952 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
I0416 17:23:00.202400   12952 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600-m02 san=[127.0.0.1 172.19.91.225 ha-022600-m02 localhost minikube]
I0416 17:23:00.337954   12952 provision.go:177] copyRemoteCerts
I0416 17:23:00.347364   12952 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0416 17:23:00.347364   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:23:02.228680   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:23:02.228680   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:02.229056   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:23:04.482911   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225

I0416 17:23:04.484012   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:04.484399   12952 sshutil.go:53] new ssh client: &{IP:172.19.91.225 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
I0416 17:23:04.599886   12952 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.252281s)
I0416 17:23:04.599886   12952 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I0416 17:23:04.599886   12952 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0416 17:23:04.647886   12952 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I0416 17:23:04.649570   12952 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
I0416 17:23:04.701062   12952 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I0416 17:23:04.701576   12952 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0416 17:23:04.747967   12952 provision.go:87] duration metric: took 13.0589703s to configureAuth
I0416 17:23:04.747967   12952 buildroot.go:189] setting minikube options for container-runtime
I0416 17:23:04.748574   12952 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0416 17:23:04.748574   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:23:06.729225   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:23:06.729566   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:06.729566   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:23:09.066597   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225

I0416 17:23:09.067003   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:09.071409   12952 main.go:141] libmachine: Using SSH client type: native
I0416 17:23:09.071938   12952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.225 22 <nil> <nil>}
I0416 17:23:09.071938   12952 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0416 17:23:09.215715   12952 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0416 17:23:09.215715   12952 buildroot.go:70] root file system type: tmpfs
I0416 17:23:09.216244   12952 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0416 17:23:09.216244   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:23:11.207418   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:23:11.207418   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:11.207489   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:23:13.562836   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225

I0416 17:23:13.562836   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:13.567717   12952 main.go:141] libmachine: Using SSH client type: native
I0416 17:23:13.568140   12952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.225 22 <nil> <nil>}
I0416 17:23:13.568140   12952 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0416 17:23:13.721335   12952 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0416 17:23:13.721434   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:23:15.639482   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:23:15.639482   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:15.639589   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:23:17.935283   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225

I0416 17:23:17.935283   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:17.939268   12952 main.go:141] libmachine: Using SSH client type: native
I0416 17:23:17.939268   12952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.225 22 <nil> <nil>}
I0416 17:23:17.939786   12952 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0416 17:23:19.989156   12952 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
                                                
I0416 17:23:19.989156   12952 machine.go:97] duration metric: took 41.3370085s to provisionDockerMachine
I0416 17:23:19.989156   12952 start.go:293] postStartSetup for "ha-022600-m02" (driver="hyperv")
I0416 17:23:19.989156   12952 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0416 17:23:19.997733   12952 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0416 17:23:19.997733   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:23:21.928360   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:23:21.928360   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:21.928360   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:23:24.194079   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225

I0416 17:23:24.194079   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:24.194354   12952 sshutil.go:53] new ssh client: &{IP:172.19.91.225 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
I0416 17:23:24.312621   12952 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3146434s)
I0416 17:23:24.320868   12952 ssh_runner.go:195] Run: cat /etc/os-release
I0416 17:23:24.327299   12952 info.go:137] Remote host: Buildroot 2023.02.9
I0416 17:23:24.327383   12952 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
I0416 17:23:24.327777   12952 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
I0416 17:23:24.328332   12952 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
I0416 17:23:24.328332   12952 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
I0416 17:23:24.336760   12952 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0416 17:23:24.354914   12952 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
I0416 17:23:24.397854   12952 start.go:296] duration metric: took 4.408448s for postStartSetup
I0416 17:23:24.405659   12952 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0416 17:23:24.405659   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:23:26.330995   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:23:26.332002   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:26.332090   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:23:28.606847   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225

I0416 17:23:28.606847   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:28.607193   12952 sshutil.go:53] new ssh client: &{IP:172.19.91.225 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
I0416 17:23:28.714198   12952 ssh_runner.go:235] Completed: sudo ls --almost-all -1 /var/lib/minikube/backup: (4.3082949s)
I0416 17:23:28.714307   12952 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0416 17:23:28.721895   12952 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0416 17:23:28.788082   12952 fix.go:56] duration metric: took 1m24.2948214s for fixHost
I0416 17:23:28.788082   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:23:30.706837   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:23:30.706877   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:30.707047   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:23:32.989083   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225

I0416 17:23:32.989083   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:32.993311   12952 main.go:141] libmachine: Using SSH client type: native
I0416 17:23:32.993390   12952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.225 22 <nil> <nil>}
I0416 17:23:32.993390   12952 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0416 17:23:33.131848   12952 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713288213.295549337
                                                
I0416 17:23:33.131954   12952 fix.go:216] guest clock: 1713288213.295549337
I0416 17:23:33.131954   12952 fix.go:229] Guest: 2024-04-16 17:23:33.295549337 +0000 UTC Remote: 2024-04-16 17:23:28.7880828 +0000 UTC m=+86.374524701 (delta=4.507466537s)
I0416 17:23:33.131954   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:23:35.091294   12952 main.go:141] libmachine: [stdout =====>] : Running

I0416 17:23:35.091294   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:35.091373   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:23:37.366077   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225

I0416 17:23:37.366156   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:37.370095   12952 main.go:141] libmachine: Using SSH client type: native
I0416 17:23:37.370492   12952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.225 22 <nil> <nil>}
I0416 17:23:37.370492   12952 main.go:141] libmachine: About to run SSH command:
sudo date -s @1713288213
I0416 17:23:37.525429   12952 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 17:23:33 UTC 2024
I0416 17:23:37.525429   12952 fix.go:236] clock set: Tue Apr 16 17:23:33 UTC 2024 (err=<nil>)
I0416 17:23:37.525429   12952 start.go:83] releasing machines lock for "ha-022600-m02", held for 1m33.0317548s
I0416 17:23:37.525812   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:23:39.476349   12952 main.go:141] libmachine: [stdout =====>] : Running
I0416 17:23:39.476430   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:39.476430   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:23:41.740500   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
I0416 17:23:41.740500   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:41.743574   12952 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0416 17:23:41.743720   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:23:41.751403   12952 ssh_runner.go:195] Run: systemctl --version
I0416 17:23:41.751403   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
I0416 17:23:43.698069   12952 main.go:141] libmachine: [stdout =====>] : Running
I0416 17:23:43.698069   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:43.698069   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:23:43.700426   12952 main.go:141] libmachine: [stdout =====>] : Running
I0416 17:23:43.700426   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:43.700426   12952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
I0416 17:23:46.071521   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
I0416 17:23:46.071549   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:46.071877   12952 sshutil.go:53] new ssh client: &{IP:172.19.91.225 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
I0416 17:23:46.096240   12952 main.go:141] libmachine: [stdout =====>] : 172.19.91.225
I0416 17:23:46.096695   12952 main.go:141] libmachine: [stderr =====>] : 
I0416 17:23:46.097054   12952 sshutil.go:53] new ssh client: &{IP:172.19.91.225 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
I0416 17:23:46.166475   12952 ssh_runner.go:235] Completed: systemctl --version: (4.4147312s)
I0416 17:23:46.175207   12952 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0416 17:23:46.298579   12952 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5546653s)
W0416 17:23:46.298579   12952 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0416 17:23:46.308210   12952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0416 17:23:46.343022   12952 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0416 17:23:46.343022   12952 start.go:494] detecting cgroup driver to use...
I0416 17:23:46.343022   12952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0416 17:23:46.400195   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0416 17:23:46.429885   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0416 17:23:46.448520   12952 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0416 17:23:46.459047   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0416 17:23:46.487882   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0416 17:23:46.517461   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0416 17:23:46.547639   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0416 17:23:46.578688   12952 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0416 17:23:46.610246   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0416 17:23:46.639658   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0416 17:23:46.668682   12952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0416 17:23:46.699014   12952 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0416 17:23:46.729060   12952 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0416 17:23:46.760774   12952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0416 17:23:46.958333   12952 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0416 17:23:47.002528   12952 start.go:494] detecting cgroup driver to use...
I0416 17:23:47.013054   12952 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0416 17:23:47.044104   12952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0416 17:23:47.075120   12952 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0416 17:23:47.109181   12952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0416 17:23:47.141278   12952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0416 17:23:47.170582   12952 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0416 17:23:47.221764   12952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0416 17:23:47.244028   12952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0416 17:23:47.284737   12952 ssh_runner.go:195] Run: which cri-dockerd
I0416 17:23:47.298174   12952 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0416 17:23:47.315803   12952 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0416 17:23:47.355299   12952 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0416 17:23:47.537501   12952 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0416 17:23:47.703732   12952 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0416 17:23:47.703950   12952 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0416 17:23:47.746808   12952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0416 17:23:47.923132   12952 ssh_runner.go:195] Run: sudo systemctl restart docker
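The `docker.go:574` line above writes a 130-byte `/etc/docker/daemon.json` that switches Docker to the cgroupfs driver before the restart. The exact payload is not echoed in the log; the fragment below is a plausible reconstruction based on minikube's usual template, and every field value here is an assumption:

```json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
```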
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-windows-amd64.exe -p ha-022600 node start m02 -v=7 --alsologtostderr": exit status 1
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr: context deadline exceeded (237.5µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:432: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-022600 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-022600 -n ha-022600: (10.8796551s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-022600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-022600 logs -n 25: (7.4587132s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:11 UTC | 16 Apr 24 17:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf --          |           |                   |                |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84 -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:12 UTC | 16 Apr 24 17:12 UTC |
	|         | busybox-7fdf7869d9-rpfpf -- nslookup |           |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- get pods -o          | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:13 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-gph6r             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-mnl84             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:13 UTC |
	|         | busybox-7fdf7869d9-rpfpf             |           |                   |                |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |                |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |                |                     |                     |
	| kubectl | -p ha-022600 -- exec                 | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC |                     |
	|         | busybox-7fdf7869d9-rpfpf -- sh       |           |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.80.1             |           |                   |                |                     |                     |
	| node    | add -p ha-022600 -v=7                | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:13 UTC | 16 Apr 24 17:16 UTC |
	|         | --alsologtostderr                    |           |                   |                |                     |                     |
	| node    | ha-022600 node stop m02 -v=7         | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:19 UTC | 16 Apr 24 17:20 UTC |
	|         | --alsologtostderr                    |           |                   |                |                     |                     |
	| node    | ha-022600 node start m02 -v=7        | ha-022600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:22 UTC |                     |
	|         | --alsologtostderr                    |           |                   |                |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:53:50
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:53:50.116950   12816 out.go:291] Setting OutFile to fd 784 ...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.117952   12816 out.go:304] Setting ErrFile to fd 696...
	I0416 16:53:50.117952   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:53:50.138920   12816 out.go:298] Setting JSON to false
	I0416 16:53:50.141501   12816 start.go:129] hostinfo: {"hostname":"minikube5","uptime":24059,"bootTime":1713262370,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 16:53:50.141501   12816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 16:53:50.143700   12816 out.go:177] * [ha-022600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 16:53:50.144387   12816 notify.go:220] Checking for updates...
	I0416 16:53:50.144982   12816 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:53:50.145881   12816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:53:50.146373   12816 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 16:53:50.146987   12816 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:53:50.147788   12816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:53:50.149250   12816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:53:54.959514   12816 out.go:177] * Using the hyperv driver based on user configuration
	I0416 16:53:54.959811   12816 start.go:297] selected driver: hyperv
	I0416 16:53:54.959811   12816 start.go:901] validating driver "hyperv" against <nil>
	I0416 16:53:54.959811   12816 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:53:55.002641   12816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:53:55.003374   12816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:53:55.003816   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:53:55.003816   12816 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0416 16:53:55.003816   12816 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0416 16:53:55.003816   12816 start.go:340] cluster config:
	{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:53:55.003816   12816 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:53:55.005700   12816 out.go:177] * Starting "ha-022600" primary control-plane node in "ha-022600" cluster
	I0416 16:53:55.006053   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:53:55.006397   12816 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 16:53:55.006397   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:53:55.006539   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:53:55.006809   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:53:55.007075   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:53:55.007821   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json: {Name:mkc2f9747189bfa0db5ea21e93e1afafc0e89eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:53:55.008149   12816 start.go:360] acquireMachinesLock for ha-022600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:53:55.009151   12816 start.go:364] duration metric: took 1.0024ms to acquireMachinesLock for "ha-022600"
	I0416 16:53:55.009151   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:53:55.009151   12816 start.go:125] createHost starting for "" (driver="hyperv")
	I0416 16:53:55.010175   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:53:55.010397   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:53:55.010397   12816 client.go:168] LocalClient.Create starting
	I0416 16:53:55.010740   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011023   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011200   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:53:55.011403   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:53:55.011541   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:53:56.852843   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:56.853713   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:53:58.346838   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:58.347399   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:53:59.667129   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:53:59.667644   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:02.789332   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:02.791736   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:54:03.131710   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: Creating VM...
	I0416 16:54:03.273248   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:54:05.824835   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:05.824937   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:54:05.825022   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:54:07.398351   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:54:07.398635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:07.398635   12816 main.go:141] libmachine: Creating VHD
	I0416 16:54:07.398733   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:54:10.982944   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E9EB5342-E929-43B6-8B97-D7BDD354CEE1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:54:10.983213   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:54:10.983213   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:54:10.992883   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:13.950584   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd' -SizeBytes 20000MB
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:16.287736   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-022600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:19.439740   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600 -DynamicMemoryEnabled $false
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:21.396684   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:21.397696   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600 -Count 2
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:23.301369   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:23.302296   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\boot2docker.iso'
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:25.540957   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:25.541060   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\disk.vhd'
	I0416 16:54:27.919093   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:27.919302   12816 main.go:141] libmachine: Starting VM...
	I0416 16:54:27.919462   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600
	I0416 16:54:30.480037   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:30.480279   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:54:30.480279   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:32.483346   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:32.484152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:34.785082   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:34.785271   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:35.799483   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:37.788691   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:37.788898   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:40.058231   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:40.058742   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:41.064074   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:43.063862   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:45.301253   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:45.301420   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:46.309647   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:48.337653   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:50.614494   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:54:50.615195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:51.620909   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:53.639317   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:53.640351   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:53.640405   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:54:55.942630   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:54:55.943393   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:55.943471   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:57.836545   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:57.837395   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:54:57.837474   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:54:59.762683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:54:59.763360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:54:59.763440   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:02.003751   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:02.010689   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:02.023158   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:02.023158   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:55:02.152140   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:55:02.152244   12816 buildroot.go:166] provisioning hostname "ha-022600"
	I0416 16:55:02.152322   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:03.956913   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:03.957618   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:06.305236   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:06.309822   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:06.310484   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:06.310484   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600 && echo "ha-022600" | sudo tee /etc/hostname
	I0416 16:55:06.479074   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600
	
	I0416 16:55:06.479182   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:08.433073   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:08.433999   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:10.792893   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:10.796713   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:10.797321   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:10.797321   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:55:10.944702   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:55:10.944870   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:55:10.944983   12816 buildroot.go:174] setting up certificates
	I0416 16:55:10.944983   12816 provision.go:84] configureAuth start
	I0416 16:55:10.945092   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:12.932736   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:12.933614   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:15.203758   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:17.088226   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:17.088334   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:19.325791   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:19.326294   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:19.326294   12816 provision.go:143] copyHostCerts
	I0416 16:55:19.326294   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:55:19.326294   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:55:19.326294   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:55:19.326900   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:55:19.328097   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:55:19.328097   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:55:19.328097   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:55:19.329417   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:55:19.329417   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:55:19.329417   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:55:19.330063   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:55:19.330726   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600 san=[127.0.0.1 172.19.81.207 ha-022600 localhost minikube]
	I0416 16:55:19.539117   12816 provision.go:177] copyRemoteCerts
	I0416 16:55:19.547114   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:55:19.547114   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:21.440985   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:23.726564   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:23.727019   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:23.834423   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.287066s)
	I0416 16:55:23.834577   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:55:23.835008   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:55:23.874966   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:55:23.875470   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0416 16:55:23.923921   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:55:23.923921   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:55:23.965042   12816 provision.go:87] duration metric: took 13.0192422s to configureAuth
	I0416 16:55:23.965042   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:55:23.965741   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:55:23.965827   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:25.905339   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:25.905903   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:25.905986   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:28.170079   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:28.170419   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:28.173356   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:28.173937   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:28.173937   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:55:28.301727   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:55:28.301727   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:55:28.302425   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:55:28.302506   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:30.181808   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:30.181889   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:32.394860   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:32.398667   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:32.399299   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:32.399475   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:55:32.556658   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:55:32.556887   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:34.446928   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:34.446969   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:34.447053   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:36.709442   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:36.710242   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:36.714111   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:36.714437   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:36.714437   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:55:38.655929   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 16:55:38.655929   12816 machine.go:97] duration metric: took 40.8162201s to provisionDockerMachine
	I0416 16:55:38.656036   12816 client.go:171] duration metric: took 1m43.6397622s to LocalClient.Create
	I0416 16:55:38.656036   12816 start.go:167] duration metric: took 1m43.6397622s to libmachine.API.Create "ha-022600"
	I0416 16:55:38.656036   12816 start.go:293] postStartSetup for "ha-022600" (driver="hyperv")
	I0416 16:55:38.656036   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:55:38.665072   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:55:38.665072   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:40.514910   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:40.515910   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:42.764754   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:42.765404   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:42.765404   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:55:42.879399   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2140881s)
	I0416 16:55:42.892410   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:55:42.899117   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:55:42.899117   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:55:42.899734   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:55:42.901086   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:55:42.901154   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:55:42.911237   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:55:42.927664   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:55:42.975440   12816 start.go:296] duration metric: took 4.3191592s for postStartSetup
	I0416 16:55:42.977201   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:44.830945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:44.831562   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:47.134349   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:47.134788   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:47.135000   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:55:47.137270   12816 start.go:128] duration metric: took 1m52.1217609s to createHost
	I0416 16:55:47.137270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:49.024055   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:49.024657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:51.238446   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:51.238526   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:51.242455   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:51.243052   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:51.243052   12816 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 16:55:51.369469   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286551.540248133
	
	I0416 16:55:51.369575   12816 fix.go:216] guest clock: 1713286551.540248133
	I0416 16:55:51.369575   12816 fix.go:229] Guest: 2024-04-16 16:55:51.540248133 +0000 UTC Remote: 2024-04-16 16:55:47.1372703 +0000 UTC m=+117.146546101 (delta=4.402977833s)
	I0416 16:55:51.369790   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:53.407581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:53.407727   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:55.663769   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:55.667543   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:55:55.667688   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.81.207 22 <nil> <nil>}
	I0416 16:55:55.667688   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286551
	I0416 16:55:55.810591   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:55:51 UTC 2024
	
	I0416 16:55:55.810700   12816 fix.go:236] clock set: Tue Apr 16 16:55:51 UTC 2024
	 (err=<nil>)
	I0416 16:55:55.810700   12816 start.go:83] releasing machines lock for "ha-022600", held for 2m0.7946995s
	I0416 16:55:55.810965   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:55:57.710878   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:57.711672   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:55:59.985139   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:55:59.985210   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:55:59.988730   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:55:59.988803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:55:59.998550   12816 ssh_runner.go:195] Run: cat /version.json
	I0416 16:55:59.998550   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.993954   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:01.995788   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:01.995959   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:01.996084   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:04.379274   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.379356   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.379701   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.391360   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:04.392161   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:04.392520   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:04.469159   12816 ssh_runner.go:235] Completed: cat /version.json: (4.4703555s)
	I0416 16:56:04.479363   12816 ssh_runner.go:195] Run: systemctl --version
	I0416 16:56:04.584079   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5950892s)
	I0416 16:56:04.593130   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:56:04.602217   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:56:04.610705   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:56:04.639084   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:56:04.639119   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:04.639119   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:04.684127   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:56:04.713899   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:56:04.734297   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:56:04.745020   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:56:04.776657   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.806087   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:56:04.854166   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:56:04.890388   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:56:04.918140   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:56:04.946595   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:56:04.975408   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:56:05.001633   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:56:05.028505   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:56:05.053299   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:05.230466   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:56:05.260161   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:56:05.269988   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:56:05.302694   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.335619   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:56:05.368663   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:56:05.402792   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.435612   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:56:05.483431   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:56:05.505797   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:56:05.548843   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:56:05.563980   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:56:05.582552   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:56:05.624048   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:56:05.804495   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:56:05.984936   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:56:05.985183   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:56:06.032244   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:06.217075   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:08.662995   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4457805s)
	I0416 16:56:08.670977   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 16:56:08.701542   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:08.730698   12816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 16:56:08.941813   12816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 16:56:09.145939   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.331138   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 16:56:09.370232   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 16:56:09.409657   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:09.615575   12816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 16:56:09.726879   12816 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 16:56:09.737760   12816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 16:56:09.746450   12816 start.go:562] Will wait 60s for crictl version
	I0416 16:56:09.755840   12816 ssh_runner.go:195] Run: which crictl
	I0416 16:56:09.771470   12816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:56:09.827603   12816 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 16:56:09.836477   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.874967   12816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 16:56:09.907967   12816 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 16:56:09.908249   12816 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 16:56:09.913888   12816 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 16:56:09.917049   12816 ip.go:210] interface addr: 172.19.80.1/20
	I0416 16:56:09.924842   12816 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 16:56:09.931830   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:56:09.968931   12816 kubeadm.go:877] updating cluster {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:56:09.968931   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:56:09.975955   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:09.996899   12816 docker.go:685] Got preloaded images: 
	I0416 16:56:09.996899   12816 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0416 16:56:10.008276   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:10.035609   12816 ssh_runner.go:195] Run: which lz4
	I0416 16:56:10.042582   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0416 16:56:10.050849   12816 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 16:56:10.058074   12816 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 16:56:10.058074   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0416 16:56:11.721910   12816 docker.go:649] duration metric: took 1.6789563s to copy over tarball
	I0416 16:56:11.731181   12816 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 16:56:20.333529   12816 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.60186s)
	I0416 16:56:20.333529   12816 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 16:56:20.400516   12816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 16:56:20.419486   12816 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0416 16:56:20.469018   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:20.655543   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 16:56:23.229259   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5734984s)
	I0416 16:56:23.240705   12816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 16:56:23.262332   12816 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0416 16:56:23.262383   12816 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:56:23.262383   12816 kubeadm.go:928] updating node { 172.19.81.207 8443 v1.29.3 docker true true} ...
	I0416 16:56:23.262383   12816 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-022600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.81.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:56:23.270008   12816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 16:56:23.307277   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:23.307277   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:23.307362   12816 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:56:23.307406   12816 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.81.207 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-022600 NodeName:ha-022600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.81.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.81.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:56:23.307691   12816 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.81.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-022600"
	  kubeletExtraArgs:
	    node-ip: 172.19.81.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.81.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 16:56:23.307749   12816 kube-vip.go:111] generating kube-vip config ...
	I0416 16:56:23.318492   12816 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:56:23.343950   12816 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:56:23.344258   12816 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0416 16:56:23.353585   12816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:56:23.370542   12816 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:56:23.379813   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0416 16:56:23.397865   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0416 16:56:23.432291   12816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:56:23.462868   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0416 16:56:23.492579   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0416 16:56:23.534977   12816 ssh_runner.go:195] Run: grep 172.19.95.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:56:23.542734   12816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:56:23.575719   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:56:23.754395   12816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:56:23.781462   12816 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600 for IP: 172.19.81.207
	I0416 16:56:23.781462   12816 certs.go:194] generating shared ca certs ...
	I0416 16:56:23.781462   12816 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 16:56:23.782411   12816 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 16:56:23.783651   12816 certs.go:256] generating profile certs ...
	I0416 16:56:23.784402   12816 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key
	I0416 16:56:23.784569   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt with IP's: []
	I0416 16:56:23.984047   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt ...
	I0416 16:56:23.984047   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.crt: {Name:mk3ebdcb7f076a09a399313f7ed3edf14403a6a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.985977   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key ...
	I0416 16:56:23.985977   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\client.key: {Name:mk94343a485b04f4b25a0ccd3245e197e7ecbec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:23.986215   12816 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648
	I0416 16:56:23.987265   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.81.207 172.19.95.254]
	I0416 16:56:24.317716   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 ...
	I0416 16:56:24.317716   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648: {Name:mk30f7000427979a1bcf8d6fc3995d1f7ccc170c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.319797   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 ...
	I0416 16:56:24.319797   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648: {Name:mk95e9e3e0f96031ef005f6c36470c216303a111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.320163   12816 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt
	I0416 16:56:24.331288   12816 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key.ff599648 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key
	I0416 16:56:24.332214   12816 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key
	I0416 16:56:24.332214   12816 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt with IP's: []
	I0416 16:56:24.406574   12816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt ...
	I0416 16:56:24.406574   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt: {Name:mk73158a02cd8861e90a3b76d50704b360c358ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.407013   12816 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key ...
	I0416 16:56:24.407013   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key: {Name:mk6842e2af8fadaf278ec7592edd5bd96f07c8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:24.408078   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:56:24.408945   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:56:24.409148   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:56:24.409732   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:56:24.417870   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:56:24.418145   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 16:56:24.418533   12816 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 16:56:24.418533   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 16:56:24.418811   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 16:56:24.418990   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 16:56:24.419161   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 16:56:24.419368   12816 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 16:56:24.419647   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 16:56:24.419767   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:24.419867   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 16:56:24.420003   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:56:24.466985   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:56:24.509816   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:56:24.554817   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:56:24.603006   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 16:56:24.646596   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 16:56:24.694120   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:56:24.741669   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 16:56:24.785888   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 16:56:24.829403   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:56:24.891821   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 16:56:24.933883   12816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 16:56:24.975091   12816 ssh_runner.go:195] Run: openssl version
	I0416 16:56:24.994129   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 16:56:25.021821   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.028512   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.037989   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 16:56:25.054924   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:56:25.080011   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:56:25.106815   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.113980   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.126339   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:56:25.144599   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:56:25.170309   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 16:56:25.199080   12816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.206080   12816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.214031   12816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 16:56:25.237026   12816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
	I0416 16:56:25.266837   12816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:56:25.273408   12816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
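In the lines above, a non-zero exit from `stat` on `apiserver-kubelet-client.crt` is interpreted as "cert doesn't exist, likely first start", which selects the bootstrap path (`kubeadm init`) rather than a restart. The decision itself is just a file-existence check, sketched here in Python (illustrative only, not minikube's Go implementation; the file name is taken from the log):

```python
from pathlib import Path
import tempfile

def is_first_start(cert_path: Path) -> bool:
    # Mirrors the log's logic: a missing apiserver-kubelet-client cert means
    # the cluster has never been bootstrapped on this node.
    return not cert_path.exists()

with tempfile.TemporaryDirectory() as d:
    cert = Path(d) / "apiserver-kubelet-client.crt"
    print(is_first_start(cert))  # True: no cert yet, so kubeadm init will run
    cert.write_text("dummy cert")
    print(is_first_start(cert))  # False: an existing cert implies a restart
```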
	I0416 16:56:25.273858   12816 kubeadm.go:391] StartCluster: {Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:56:25.281991   12816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 16:56:25.314891   12816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 16:56:25.342248   12816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 16:56:25.368032   12816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 16:56:25.385737   12816 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 16:56:25.385737   12816 kubeadm.go:156] found existing configuration files:
	
	I0416 16:56:25.393851   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 16:56:25.410393   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 16:56:25.421874   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 16:56:25.453762   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 16:56:25.468769   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 16:56:25.477353   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 16:56:25.501898   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.515888   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 16:56:25.524885   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 16:56:25.548518   12816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 16:56:25.563660   12816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 16:56:25.572269   12816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
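The four grep/rm pairs above apply one rule per kubeconfig: if a file under `/etc/kubernetes` does not mention `https://control-plane.minikube.internal:8443`, it is considered stale and removed (with `rm -f`, so a missing file is harmless) before `kubeadm init` runs. That cleanup rule can be sketched compactly in Python (illustrative only, not minikube's Go code; file names and endpoint are the ones in the log):

```python
from pathlib import Path
import tempfile

ENDPOINT = "https://control-plane.minikube.internal:8443"
CONF_FILES = ["admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"]

def clean_stale_configs(kube_dir: Path) -> list:
    """Remove each well-known kubeconfig that doesn't mention ENDPOINT;
    return the names removed. Missing files count as removed too, matching
    the unconditional `rm -f` in the log."""
    removed = []
    for name in CONF_FILES:
        path = kube_dir / name
        if not path.exists() or ENDPOINT not in path.read_text():
            path.unlink(missing_ok=True)
            removed.append(name)
    return removed

with tempfile.TemporaryDirectory() as d:
    kube = Path(d)
    (kube / "admin.conf").write_text(f"server: {ENDPOINT}\n")             # fresh: kept
    (kube / "kubelet.conf").write_text("server: https://1.2.3.4:8443\n")  # stale: removed
    print(clean_stale_configs(kube))  # ['kubelet.conf', 'controller-manager.conf', 'scheduler.conf']
```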
	I0416 16:56:25.587981   12816 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 16:56:25.791977   12816 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 16:56:25.791977   12816 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 16:56:25.958638   12816 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 16:56:25.959035   12816 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 16:56:25.959403   12816 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0416 16:56:26.228464   12816 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 16:56:26.229544   12816 out.go:204]   - Generating certificates and keys ...
	I0416 16:56:26.229862   12816 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 16:56:26.230882   12816 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 16:56:26.359024   12816 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 16:56:26.583044   12816 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 16:56:26.715543   12816 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 16:56:27.014892   12816 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 16:56:27.414264   12816 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 16:56:27.414467   12816 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.642396   12816 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 16:56:27.642770   12816 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-022600 localhost] and IPs [172.19.81.207 127.0.0.1 ::1]
	I0416 16:56:27.844566   12816 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 16:56:28.089475   12816 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 16:56:28.543900   12816 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 16:56:28.548586   12816 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 16:56:29.051829   12816 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 16:56:29.485679   12816 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 16:56:29.830737   12816 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 16:56:30.055972   12816 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 16:56:30.305118   12816 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 16:56:30.310446   12816 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 16:56:30.311113   12816 out.go:204]   - Booting up control plane ...
	I0416 16:56:30.311289   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 16:56:30.311970   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 16:56:30.317049   12816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 16:56:30.342443   12816 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 16:56:30.345140   12816 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 16:56:30.526725   12816 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 16:56:37.142045   12816 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.615653 seconds
	I0416 16:56:37.159025   12816 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 16:56:37.175108   12816 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 16:56:37.707867   12816 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 16:56:37.708715   12816 kubeadm.go:309] [mark-control-plane] Marking the node ha-022600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 16:56:38.222729   12816 kubeadm.go:309] [bootstrap-token] Using token: a3r5qn.ikva200bfcppykg5
	I0416 16:56:38.223819   12816 out.go:204]   - Configuring RBAC rules ...
	I0416 16:56:38.224231   12816 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 16:56:38.232416   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 16:56:38.244982   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 16:56:38.249926   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 16:56:38.257723   12816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 16:56:38.262029   12816 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 16:56:38.279883   12816 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 16:56:38.592701   12816 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 16:56:38.638273   12816 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 16:56:38.639572   12816 kubeadm.go:309] 
	I0416 16:56:38.640154   12816 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 16:56:38.640230   12816 kubeadm.go:309] 
	I0416 16:56:38.640982   12816 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 16:56:38.641038   12816 kubeadm.go:309] 
	I0416 16:56:38.641299   12816 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 16:56:38.641581   12816 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 16:56:38.641765   12816 kubeadm.go:309] 
	I0416 16:56:38.641989   12816 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 16:56:38.642031   12816 kubeadm.go:309] 
	I0416 16:56:38.642184   12816 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 16:56:38.642228   12816 kubeadm.go:309] 
	I0416 16:56:38.642350   12816 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 16:56:38.642660   12816 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 16:56:38.642862   12816 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 16:56:38.642900   12816 kubeadm.go:309] 
	I0416 16:56:38.643166   12816 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 16:56:38.643426   12816 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 16:56:38.643426   12816 kubeadm.go:309] 
	I0416 16:56:38.643613   12816 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.643867   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 16:56:38.643909   12816 kubeadm.go:309] 	--control-plane 
	I0416 16:56:38.643961   12816 kubeadm.go:309] 
	I0416 16:56:38.644233   12816 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 16:56:38.644272   12816 kubeadm.go:309] 
	I0416 16:56:38.644444   12816 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a3r5qn.ikva200bfcppykg5 \
	I0416 16:56:38.644734   12816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
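The `--discovery-token-ca-cert-hash` in the join commands above is, per the kubeadm documentation, a SHA-256 digest of the cluster CA's public key in DER-encoded SubjectPublicKeyInfo form, prefixed with `sha256:`. The shape of that value can be sketched with stdlib `hashlib`; the key bytes below are a placeholder standing in for real SPKI bytes (extracting those from a certificate needs a crypto library), so only the format matches the log, not the digest itself:

```python
import hashlib

# Placeholder bytes standing in for the CA public key's DER-encoded
# SubjectPublicKeyInfo (NOT a real key).
spki_der = b"\x30\x82\x01\x22placeholder-spki-bytes"

digest = hashlib.sha256(spki_der).hexdigest()
token_ca_cert_hash = f"sha256:{digest}"

# Same "sha256:<64 hex chars>" shape as the hash in the join command above.
print(token_ca_cert_hash)
```

Joining nodes recompute this digest from the CA certificate served by the API server and refuse to join on a mismatch, which is what makes token-based discovery safe against a spoofed control plane.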
	I0416 16:56:38.647455   12816 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 16:56:38.647488   12816 cni.go:84] Creating CNI manager for ""
	I0416 16:56:38.647539   12816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:56:38.648246   12816 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 16:56:38.657141   12816 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 16:56:38.671263   12816 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 16:56:38.671263   12816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 16:56:38.722410   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 16:56:39.265655   12816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.279279   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-022600 minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=ha-022600 minikube.k8s.io/primary=true
	I0416 16:56:39.290244   12816 ops.go:34] apiserver oom_adj: -16
	I0416 16:56:39.441163   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:39.950155   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.453751   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:40.955147   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.455931   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:41.953044   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.454696   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:42.949299   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.454962   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:43.953447   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.456402   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:44.956686   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.449476   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:45.951602   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.451988   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:46.949212   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.449356   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:47.950703   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.458777   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:48.956811   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.456669   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:49.943595   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.443906   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:50.950503   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.454863   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:51.944285   12816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:56:52.083562   12816 kubeadm.go:1107] duration metric: took 12.8170858s to wait for elevateKubeSystemPrivileges
	W0416 16:56:52.083816   12816 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 16:56:52.083816   12816 kubeadm.go:393] duration metric: took 26.808438s to StartCluster
	I0416 16:56:52.083816   12816 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.084214   12816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:52.086643   12816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:56:52.088384   12816 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:56:52.088384   12816 start.go:240] waiting for startup goroutines ...
	I0416 16:56:52.088384   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 16:56:52.088384   12816 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 16:56:52.088630   12816 addons.go:69] Setting storage-provisioner=true in profile "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:234] Setting addon storage-provisioner=true in "ha-022600"
	I0416 16:56:52.088732   12816 addons.go:69] Setting default-storageclass=true in profile "ha-022600"
	I0416 16:56:52.088850   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:52.088964   12816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-022600"
	I0416 16:56:52.088964   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:56:52.090289   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.090671   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:52.207597   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 16:56:52.469504   12816 start.go:946] {"host.minikube.internal": 172.19.80.1} host record injected into CoreDNS's ConfigMap
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.164683   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.165583   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:54.165635   12816 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:56:54.165635   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:54.166734   12816 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:56:54.166340   12816 kapi.go:59] client config for ha-022600: &rest.Config{Host:"https://172.19.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-022600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 16:56:54.167133   12816 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:56:54.167133   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 16:56:54.167133   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:54.167791   12816 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 16:56:54.168180   12816 addons.go:234] Setting addon default-storageclass=true in "ha-022600"
	I0416 16:56:54.168347   12816 host.go:66] Checking if "ha-022600" exists ...
	I0416 16:56:54.169251   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:56.312581   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.312988   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313046   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:56.313270   12816 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 16:56:56.313270   12816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 16:56:56.313270   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600 ).state
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:56:58.330392   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.330966   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600 ).networkadapters[0]).ipaddresses[0]
	I0416 16:56:58.735727   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:56:58.735876   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:56:58.736103   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:56:58.898469   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stdout =====>] : 172.19.81.207
	
	I0416 16:57:00.675802   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:00.676245   12816 sshutil.go:53] new ssh client: &{IP:172.19.81.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600\id_rsa Username:docker}
	I0416 16:57:00.828151   12816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 16:57:01.248041   12816 round_trippers.go:463] GET https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0416 16:57:01.248041   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.248041   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.248041   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.261890   12816 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0416 16:57:01.262478   12816 round_trippers.go:463] PUT https://172.19.95.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0416 16:57:01.262478   12816 round_trippers.go:469] Request Headers:
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Content-Type: application/json
	I0416 16:57:01.262478   12816 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:57:01.262478   12816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 16:57:01.268964   12816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:57:01.269995   12816 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 16:57:01.270495   12816 addons.go:505] duration metric: took 9.181591s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 16:57:01.270576   12816 start.go:245] waiting for cluster config update ...
	I0416 16:57:01.270618   12816 start.go:254] writing updated cluster config ...
	I0416 16:57:01.271859   12816 out.go:177] 
	I0416 16:57:01.284169   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:57:01.284169   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.285951   12816 out.go:177] * Starting "ha-022600-m02" control-plane node in "ha-022600" cluster
	I0416 16:57:01.286952   12816 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 16:57:01.286952   12816 cache.go:56] Caching tarball of preloaded images
	I0416 16:57:01.286952   12816 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 16:57:01.286952   12816 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 16:57:01.286952   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:57:01.296247   12816 start.go:360] acquireMachinesLock for ha-022600-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:57:01.297324   12816 start.go:364] duration metric: took 1.0773ms to acquireMachinesLock for "ha-022600-m02"
	I0416 16:57:01.297559   12816 start.go:93] Provisioning new machine with config: &{Name:ha-022600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-022600 Namespace:default APIServerHAVIP:172.19.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.81.207 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 16:57:01.297559   12816 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0416 16:57:01.297559   12816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:57:01.297559   12816 start.go:159] libmachine.API.Create for "ha-022600" (driver="hyperv")
	I0416 16:57:01.297559   12816 client.go:168] LocalClient.Create starting
	I0416 16:57:01.298838   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299147   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299293   12816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Decoding PEM data...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: Parsing certificate...
	I0416 16:57:01.299468   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 16:57:03.017072   12816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 16:57:03.017279   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:03.017366   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:04.580895   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:05.984295   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:09.314760   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:09.316740   12816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:57:09.669552   12816 main.go:141] libmachine: Creating SSH key...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: Creating VM...
	I0416 16:57:10.010472   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 16:57:12.690022   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:12.690107   12816 main.go:141] libmachine: Using switch "Default Switch"
	I0416 16:57:12.690185   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:14.267157   12816 main.go:141] libmachine: Creating VHD
	I0416 16:57:14.267157   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : FE960248-03C1-43D6-B7AE-A60D4C86308B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 16:57:17.749511   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing magic tar header
	I0416 16:57:17.749511   12816 main.go:141] libmachine: Writing SSH key tar header
	I0416 16:57:17.758158   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 16:57:20.709379   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:20.709950   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:20.710019   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd' -SizeBytes 20000MB
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:23.025729   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-022600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:26.131923   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-022600-m02 -DynamicMemoryEnabled $false
	I0416 16:57:28.159153   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:28.159229   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:28.159409   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-022600-m02 -Count 2
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:30.126033   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\boot2docker.iso'
	I0416 16:57:32.420739   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:32.421735   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:32.421878   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-022600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\disk.vhd'
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:34.779822   12816 main.go:141] libmachine: Starting VM...
	I0416 16:57:34.780971   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-022600-m02
	I0416 16:57:37.369505   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:37.369687   12816 main.go:141] libmachine: Waiting for host to start...
	I0416 16:57:37.369767   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:39.415029   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:39.415286   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:41.685132   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:42.700464   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:44.674039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:46.993492   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:48.000886   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:49.992438   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:49.992894   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:49.992930   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:52.274971   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:53.290891   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:57:55.287716   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:57:55.287962   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:55.288037   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stdout =====>] : 
	I0416 16:57:57.564053   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:57:58.572803   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:00.584542   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:02.905327   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:02.905391   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:04.899133   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:04.899479   12816 machine.go:94] provisionDockerMachine start ...
	I0416 16:58:04.899479   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:06.914221   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:06.914869   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:09.273511   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:09.273546   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:09.277783   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:09.278406   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:09.278406   12816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:58:09.413281   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 16:58:09.413281   12816 buildroot.go:166] provisioning hostname "ha-022600-m02"
	I0416 16:58:09.413281   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:11.438626   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:11.439079   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:13.801181   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:13.805295   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:13.805684   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:13.805684   12816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-022600-m02 && echo "ha-022600-m02" | sudo tee /etc/hostname
	I0416 16:58:13.957933   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-022600-m02
	
	I0416 16:58:13.958021   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:15.863768   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:18.176996   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:18.178002   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:18.182057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:18.182681   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:18.182681   12816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-022600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-022600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-022600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:58:18.315751   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:58:18.315853   12816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 16:58:18.315853   12816 buildroot.go:174] setting up certificates
	I0416 16:58:18.315853   12816 provision.go:84] configureAuth start
	I0416 16:58:18.315853   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:20.243862   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:20.243928   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:22.525833   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:22.525945   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:22.526057   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:24.418671   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:24.418894   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:26.735560   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:26.735560   12816 provision.go:143] copyHostCerts
	I0416 16:58:26.736546   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 16:58:26.736627   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 16:58:26.736627   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 16:58:26.737290   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 16:58:26.737900   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 16:58:26.737900   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 16:58:26.738191   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 16:58:26.738908   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 16:58:26.738977   12816 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 16:58:26.738977   12816 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 16:58:26.739652   12816 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-022600-m02 san=[127.0.0.1 172.19.80.125 ha-022600-m02 localhost minikube]
	I0416 16:58:26.917277   12816 provision.go:177] copyRemoteCerts
	I0416 16:58:26.926308   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:58:26.926600   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:28.829360   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:28.830343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:31.113681   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:31.113681   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:31.229222   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3026703s)
	I0416 16:58:31.229222   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 16:58:31.229700   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 16:58:31.279666   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 16:58:31.280307   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:58:31.328101   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 16:58:31.328245   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:58:31.382563   12816 provision.go:87] duration metric: took 13.065969s to configureAuth
	I0416 16:58:31.382563   12816 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:58:31.383343   12816 config.go:182] Loaded profile config "ha-022600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:58:31.383343   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:33.331199   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:33.331275   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:35.653673   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:35.653721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:35.656855   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:35.657430   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:35.657430   12816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 16:58:35.781565   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 16:58:35.781565   12816 buildroot.go:70] root file system type: tmpfs
	I0416 16:58:35.781565   12816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 16:58:35.782090   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:37.695478   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:37.696344   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:39.956169   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:39.961057   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:39.961515   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:39.961564   12816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.81.207"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 16:58:40.123664   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.81.207
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 16:58:40.123818   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:42.064878   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:42.064974   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:42.065152   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:44.326252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:44.330103   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:44.330731   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:44.330731   12816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 16:58:46.283136   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 16:58:46.283253   12816 machine.go:97] duration metric: took 41.3814279s to provisionDockerMachine
	I0416 16:58:46.283253   12816 client.go:171] duration metric: took 1m44.9797412s to LocalClient.Create
	I0416 16:58:46.283253   12816 start.go:167] duration metric: took 1m44.9797412s to libmachine.API.Create "ha-022600"
	I0416 16:58:46.283253   12816 start.go:293] postStartSetup for "ha-022600-m02" (driver="hyperv")
	I0416 16:58:46.283345   12816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:58:46.292724   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:58:46.292724   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:48.207625   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:50.480821   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:50.480821   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:58:50.575284   12816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2823171s)
	I0416 16:58:50.584260   12816 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:58:50.591292   12816 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 16:58:50.591292   12816 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 16:58:50.591900   12816 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 16:58:50.591900   12816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 16:58:50.601073   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:58:50.618807   12816 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 16:58:50.671301   12816 start.go:296] duration metric: took 4.3877068s for postStartSetup
	I0416 16:58:50.673161   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:52.621684   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:52.622252   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:54.923435   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:54.923763   12816 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-022600\config.json ...
	I0416 16:58:54.926483   12816 start.go:128] duration metric: took 1m53.622481s to createHost
	I0416 16:58:54.926657   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:58:56.793105   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:56.793184   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:58:59.024255   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:58:59.025184   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:58:59.029108   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:58:59.029633   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:58:59.029730   12816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:58:59.149333   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713286739.315259098
	
	I0416 16:58:59.149333   12816 fix.go:216] guest clock: 1713286739.315259098
	I0416 16:58:59.149333   12816 fix.go:229] Guest: 2024-04-16 16:58:59.315259098 +0000 UTC Remote: 2024-04-16 16:58:54.9265716 +0000 UTC m=+304.925199701 (delta=4.388687498s)
	I0416 16:58:59.149333   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:01.054656   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:01.054831   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:03.303195   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:03.307071   12816 main.go:141] libmachine: Using SSH client type: native
	I0416 16:59:03.307459   12816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.80.125 22 <nil> <nil>}
	I0416 16:59:03.307531   12816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713286739
	I0416 16:59:03.449024   12816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 16:58:59 UTC 2024
	
	I0416 16:59:03.449024   12816 fix.go:236] clock set: Tue Apr 16 16:58:59 UTC 2024
	 (err=<nil>)
	I0416 16:59:03.449024   12816 start.go:83] releasing machines lock for "ha-022600-m02", held for 2m2.1447745s
	I0416 16:59:03.450039   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:05.434998   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:07.737918   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:07.739042   12816 out.go:177] * Found network options:
	I0416 16:59:07.739784   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.740404   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.741027   12816 out.go:177]   - NO_PROXY=172.19.81.207
	W0416 16:59:07.741505   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 16:59:07.742708   12816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:59:07.744988   12816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:59:07.745153   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:07.752817   12816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 16:59:07.752817   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-022600-m02 ).state
	I0416 16:59:09.758953   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:09.759721   12816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-022600-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 16:59:12.157582   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.158536   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.159044   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stdout =====>] : 172.19.80.125
	
	I0416 16:59:12.184719   12816 main.go:141] libmachine: [stderr =====>] : 
	I0416 16:59:12.185179   12816 sshutil.go:53] new ssh client: &{IP:172.19.80.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-022600-m02\id_rsa Username:docker}
	I0416 16:59:12.257436   12816 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5043642s)
	W0416 16:59:12.257436   12816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:59:12.266545   12816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:59:12.367206   12816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:59:12.367296   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:12.367330   12816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6219642s)
	I0416 16:59:12.367330   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:12.423201   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 16:59:12.453988   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 16:59:12.472992   12816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 16:59:12.482991   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 16:59:12.510864   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.538866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 16:59:12.565866   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 16:59:12.597751   12816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:59:12.622761   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 16:59:12.648905   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 16:59:12.674904   12816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 16:59:12.713452   12816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:59:12.741495   12816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:59:12.768497   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:12.975524   12816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 16:59:13.011635   12816 start.go:494] detecting cgroup driver to use...
	I0416 16:59:13.023647   12816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 16:59:13.058146   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.091991   12816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:59:13.139058   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:59:13.173081   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.208242   12816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 16:59:13.259511   12816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 16:59:13.282094   12816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:59:13.329081   12816 ssh_runner.go:195] Run: which cri-dockerd
	I0416 16:59:13.344832   12816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 16:59:13.362131   12816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 16:59:13.403377   12816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 16:59:13.597444   12816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 16:59:13.768147   12816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 16:59:13.768278   12816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 16:59:13.808294   12816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:59:13.987216   12816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:00:15.104612   12816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1138396s)
	I0416 17:00:15.115049   12816 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 17:00:15.145752   12816 out.go:177] 
	W0416 17:00:15.146473   12816 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 16:58:45 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.076842920Z" level=info msg="Starting up"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.077687177Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 16:58:45 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:45.078706068Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.109665355Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138411128Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138448735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138508447Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138523049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138600164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138632670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138848110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.138955930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139030244Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139045347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139142365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.139433520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142495192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142588309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142778845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.142795748Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143044695Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143174419Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.143191422Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.152862930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153144583Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153313214Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153337519Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153354522Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153467543Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.153957434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154159572Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154195179Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154212082Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154230586Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154258491Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154272393Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154287696Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154303599Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154317302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154330504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154344107Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154373612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154392516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154406618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154421121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154434024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154447526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154460128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154474031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154498536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154514539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154525841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154555046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154568249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154583952Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154604755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154629960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154642062Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154700973Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.154916114Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155014532Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155030135Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155203567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155302486Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155325090Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155706861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155796078Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155907599Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 16:58:45 ha-022600-m02 dockerd[672]: time="2024-04-16T16:58:45.155947306Z" level=info msg="containerd successfully booted in 0.047582s"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.119001526Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.129323458Z" level=info msg="Loading containers: start."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.358382320Z" level=info msg="Loading containers: done."
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377033580Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.377149301Z" level=info msg="Daemon has completed initialization"
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.447556885Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 16:58:46 ha-022600-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 16 16:58:46 ha-022600-m02 dockerd[666]: time="2024-04-16T16:58:46.449134569Z" level=info msg="API listen on [::]:2376"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.178053148Z" level=info msg="Processing signal 'terminated'"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.179830517Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 16:59:14 ha-022600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.180814055Z" level=info msg="Daemon shutdown complete"
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181020363Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 16:59:14 ha-022600-m02 dockerd[666]: time="2024-04-16T16:59:14.181054564Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 16:59:15 ha-022600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 16:59:15 ha-022600-m02 dockerd[1019]: time="2024-04-16T16:59:15.248212596Z" level=info msg="Starting up"
	Apr 16 17:00:15 ha-022600-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 17:00:15 ha-022600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0416 17:00:15.146611   12816 out.go:239] * 
	W0416 17:00:15.147806   12816 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 17:00:15.148383   12816 out.go:177] 
	
	
	==> Docker <==
	Apr 16 17:18:34 ha-022600 dockerd[1325]: 2024/04/16 17:18:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:19:36 ha-022600 dockerd[1325]: 2024/04/16 17:19:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:09 ha-022600 dockerd[1325]: 2024/04/16 17:21:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:09 ha-022600 dockerd[1325]: 2024/04/16 17:21:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:09 ha-022600 dockerd[1325]: 2024/04/16 17:21:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:09 ha-022600 dockerd[1325]: 2024/04/16 17:21:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:09 ha-022600 dockerd[1325]: 2024/04/16 17:21:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:10 ha-022600 dockerd[1325]: 2024/04/16 17:21:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:10 ha-022600 dockerd[1325]: 2024/04/16 17:21:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:10 ha-022600 dockerd[1325]: 2024/04/16 17:21:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:50 ha-022600 dockerd[1325]: 2024/04/16 17:21:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:50 ha-022600 dockerd[1325]: 2024/04/16 17:21:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:50 ha-022600 dockerd[1325]: 2024/04/16 17:21:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:50 ha-022600 dockerd[1325]: 2024/04/16 17:21:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:50 ha-022600 dockerd[1325]: 2024/04/16 17:21:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:50 ha-022600 dockerd[1325]: 2024/04/16 17:21:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:50 ha-022600 dockerd[1325]: 2024/04/16 17:21:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 17:21:50 ha-022600 dockerd[1325]: 2024/04/16 17:21:50 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d38b1a5f4caa8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   24 minutes ago      Running             busybox                   0                   8a4de3aa24af1       busybox-7fdf7869d9-rpfpf
	3fe545bfad4e6       cbb01a7bd410d                                                                                         28 minutes ago      Running             coredns                   0                   093278b3840ef       coredns-76f75df574-qm89x
	979dee88be2b4       cbb01a7bd410d                                                                                         28 minutes ago      Running             coredns                   0                   4ad38b0d59335       coredns-76f75df574-ww2r6
	257879ecf06b2       6e38f40d628db                                                                                         28 minutes ago      Running             storage-provisioner       0                   bf991c3e34e2d       storage-provisioner
	be245de9ef545       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              28 minutes ago      Running             kindnet-cni               0                   92c35b3fd0967       kindnet-mwqvl
	05db92f49e0df       a1d263b5dc5b0                                                                                         28 minutes ago      Running             kube-proxy                0                   12380f49c1509       kube-proxy-2vddt
	d1ba82cd26254       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     28 minutes ago      Running             kube-vip                  0                   fa2c75c4c8d59       kube-vip-ha-022600
	a7fb69539df62       6052a25da3f97                                                                                         28 minutes ago      Running             kube-controller-manager   0                   b536621e20d4b       kube-controller-manager-ha-022600
	4fd5df8c9fd37       39f995c9f1996                                                                                         28 minutes ago      Running             kube-apiserver            0                   5a7a1e80caeb4       kube-apiserver-ha-022600
	e042d71e8b0e8       8c390d98f50c0                                                                                         28 minutes ago      Running             kube-scheduler            0                   5a2551c91a1b6       kube-scheduler-ha-022600
	c29b0762ff0bf       3861cfcd7c04c                                                                                         28 minutes ago      Running             etcd                      0                   c8a9aa3126cf5       etcd-ha-022600
	
	
	==> coredns [3fe545bfad4e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55981 - 4765 "HINFO IN 3735046377920793891.8143170502200932773. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058936595s
	[INFO] 10.244.0.4:43350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000388921s
	[INFO] 10.244.0.4:35317 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.052221997s
	[INFO] 10.244.0.4:52074 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.040069369s
	[INFO] 10.244.0.4:49068 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.053312593s
	[INFO] 10.244.0.4:54711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123507s
	[INFO] 10.244.0.4:44694 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037006811s
	[INFO] 10.244.0.4:33399 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124606s
	[INFO] 10.244.0.4:37329 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000241612s
	[INFO] 10.244.0.4:57333 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131407s
	[INFO] 10.244.0.4:38806 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060403s
	[INFO] 10.244.0.4:48948 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000263914s
	[INFO] 10.244.0.4:51825 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177309s
	[INFO] 10.244.0.4:53272 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00018301s
	
	
	==> coredns [979dee88be2b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50127 - 24072 "HINFO IN 7665836187497317301.2285362183679153792. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027543487s
	[INFO] 10.244.0.4:34822 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000224011s
	[INFO] 10.244.0.4:48911 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000349218s
	[INFO] 10.244.0.4:43369 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023699624s
	[INFO] 10.244.0.4:56309 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000258914s
	[INFO] 10.244.0.4:36791 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.003463479s
	[INFO] 10.244.0.4:55996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000301816s
	[INFO] 10.244.0.4:35967 - 5 "PTR IN 1.80.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000116506s
	
	
	==> describe nodes <==
	Name:               ha-022600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-022600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-022600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T16_56_39_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:56:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-022600
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:25:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:21:40 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:21:40 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:21:40 +0000   Tue, 16 Apr 2024 16:56:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:21:40 +0000   Tue, 16 Apr 2024 16:57:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.81.207
	  Hostname:    ha-022600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4674338fa494bbcb2e21e2b4385c5e1
	  System UUID:                201025fc-0c03-cc49-a194-29d6500971a2
	  Boot ID:                    6ae5bedd-6e8e-4f58-b08c-8e9912fd04de
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-rpfpf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-76f75df574-qm89x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 coredns-76f75df574-ww2r6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-ha-022600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-mwqvl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-022600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-022600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-2vddt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-022600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-022600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28m   kube-proxy       
	  Normal  Starting                 28m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m   kubelet          Node ha-022600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m   kubelet          Node ha-022600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m   kubelet          Node ha-022600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28m   node-controller  Node ha-022600 event: Registered Node ha-022600 in Controller
	  Normal  NodeReady                28m   kubelet          Node ha-022600 status is now: NodeReady
	
	
	Name:               ha-022600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-022600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-022600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T17_16_38_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:16:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-022600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:24:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:21:43 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:21:43 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:21:43 +0000   Tue, 16 Apr 2024 17:16:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:21:43 +0000   Tue, 16 Apr 2024 17:16:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.93.94
	  Hostname:    ha-022600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 cefa169b716045589e59382d0939ad48
	  System UUID:                25782c5b-4e02-0547-b063-db6b9c5f1f5b
	  Boot ID:                    e7c67d41-aa2d-47a1-952b-fa7ff5422e05
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-mnl84    0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kindnet-7c2px               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m29s
	  kube-system                 kube-proxy-ss5lp            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m19s                  kube-proxy       
	  Normal  Starting                 8m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m29s (x2 over 8m29s)  kubelet          Node ha-022600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m29s (x2 over 8m29s)  kubelet          Node ha-022600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m29s (x2 over 8m29s)  kubelet          Node ha-022600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m25s                  node-controller  Node ha-022600-m03 event: Registered Node ha-022600-m03 in Controller
	  Normal  NodeReady                8m12s                  kubelet          Node ha-022600-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.656516] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr16 16:55] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.165290] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[Apr16 16:56] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.091843] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.493988] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.172637] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.230010] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +2.695048] systemd-fstab-generator[1185]: Ignoring "noauto" option for root device
	[  +0.219400] systemd-fstab-generator[1197]: Ignoring "noauto" option for root device
	[  +0.196554] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.267217] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[ +11.053282] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +0.095458] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.012264] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +6.758798] systemd-fstab-generator[1720]: Ignoring "noauto" option for root device
	[  +0.093227] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.850543] systemd-fstab-generator[2213]: Ignoring "noauto" option for root device
	[  +0.130310] kauditd_printk_skb: 72 callbacks suppressed
	[ +15.381320] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.386371] kauditd_printk_skb: 29 callbacks suppressed
	[Apr16 17:00] hrtimer: interrupt took 5042261 ns
	[  +0.908827] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [c29b0762ff0b] <==
	{"level":"info","ts":"2024-04-16T17:11:33.360995Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1502}
	{"level":"info","ts":"2024-04-16T17:11:33.366072Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1502,"took":"4.116913ms","hash":127222243,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1818624,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-16T17:11:33.366162Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":127222243,"revision":1502,"compact-revision":964}
	{"level":"info","ts":"2024-04-16T17:15:11.421098Z","caller":"traceutil/trace.go:171","msg":"trace[1208553513] transaction","detail":"{read_only:false; response_revision:2431; number_of_response:1; }","duration":"155.410586ms","start":"2024-04-16T17:15:11.265667Z","end":"2024-04-16T17:15:11.421077Z","steps":["trace[1208553513] 'process raft request'  (duration: 155.135072ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:31.529032Z","caller":"traceutil/trace.go:171","msg":"trace[505251683] linearizableReadLoop","detail":"{readStateIndex:2832; appliedIndex:2831; }","duration":"107.445309ms","start":"2024-04-16T17:16:31.421572Z","end":"2024-04-16T17:16:31.529017Z","steps":["trace[505251683] 'read index received'  (duration: 107.319103ms)","trace[505251683] 'applied index is now lower than readState.Index'  (duration: 125.606µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:16:31.529184Z","caller":"traceutil/trace.go:171","msg":"trace[359290184] transaction","detail":"{read_only:false; response_revision:2575; number_of_response:1; }","duration":"197.441024ms","start":"2024-04-16T17:16:31.331735Z","end":"2024-04-16T17:16:31.529176Z","steps":["trace[359290184] 'process raft request'  (duration: 197.196912ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:31.529431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.83703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-04-16T17:16:31.52969Z","caller":"traceutil/trace.go:171","msg":"trace[1576069612] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2575; }","duration":"108.130545ms","start":"2024-04-16T17:16:31.421545Z","end":"2024-04-16T17:16:31.529676Z","steps":["trace[1576069612] 'agreement among raft nodes before linearized reading'  (duration: 107.801628ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.727834Z","caller":"traceutil/trace.go:171","msg":"trace[1449824028] transaction","detail":"{read_only:false; response_revision:2578; number_of_response:1; }","duration":"364.497189ms","start":"2024-04-16T17:16:33.363317Z","end":"2024-04-16T17:16:33.727815Z","steps":["trace[1449824028] 'process raft request'  (duration: 364.339681ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.729115Z","caller":"traceutil/trace.go:171","msg":"trace[948704194] linearizableReadLoop","detail":"{readStateIndex:2837; appliedIndex:2836; }","duration":"283.56914ms","start":"2024-04-16T17:16:33.445533Z","end":"2024-04-16T17:16:33.729102Z","steps":["trace[948704194] 'read index received'  (duration: 282.906606ms)","trace[948704194] 'applied index is now lower than readState.Index'  (duration: 662.034µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:16:33.72965Z","caller":"traceutil/trace.go:171","msg":"trace[1908879286] transaction","detail":"{read_only:false; response_revision:2579; number_of_response:1; }","duration":"291.495046ms","start":"2024-04-16T17:16:33.438143Z","end":"2024-04-16T17:16:33.729638Z","steps":["trace[1908879286] 'process raft request'  (duration: 290.677204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.729668Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:16:33.363297Z","time spent":"364.643596ms","remote":"127.0.0.1:49456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":25,"response count":0,"response size":38,"request content":"compare:<key:\"compact_rev_key\" version:3 > success:<request_put:<key:\"compact_rev_key\" value_size:4 >> failure:<request_range:<key:\"compact_rev_key\" > >"}
	{"level":"warn","ts":"2024-04-16T17:16:33.729962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.040139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-04-16T17:16:33.73064Z","caller":"traceutil/trace.go:171","msg":"trace[1591257630] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2579; }","duration":"186.677072ms","start":"2024-04-16T17:16:33.543885Z","end":"2024-04-16T17:16:33.730562Z","steps":["trace[1591257630] 'agreement among raft nodes before linearized reading'  (duration: 185.842129ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.730022Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.488987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:16:33.731097Z","caller":"traceutil/trace.go:171","msg":"trace[339406949] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2579; }","duration":"285.581443ms","start":"2024-04-16T17:16:33.445505Z","end":"2024-04-16T17:16:33.731087Z","steps":["trace[339406949] 'agreement among raft nodes before linearized reading'  (duration: 284.501387ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:16:33.730066Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.750168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-04-16T17:16:33.731323Z","caller":"traceutil/trace.go:171","msg":"trace[1323315847] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2579; }","duration":"143.028733ms","start":"2024-04-16T17:16:33.588284Z","end":"2024-04-16T17:16:33.731313Z","steps":["trace[1323315847] 'agreement among raft nodes before linearized reading'  (duration: 141.746268ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:16:33.740796Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2041}
	{"level":"info","ts":"2024-04-16T17:16:33.745817Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2041,"took":"4.568334ms","hash":1427640317,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1806336,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-16T17:16:33.746025Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1427640317,"revision":2041,"compact-revision":1502}
	{"level":"info","ts":"2024-04-16T17:16:40.98492Z","caller":"traceutil/trace.go:171","msg":"trace[2045382782] transaction","detail":"{read_only:false; response_revision:2627; number_of_response:1; }","duration":"150.576419ms","start":"2024-04-16T17:16:40.834317Z","end":"2024-04-16T17:16:40.984893Z","steps":["trace[2045382782] 'process raft request'  (duration: 150.385009ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:21:33.757276Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2578}
	{"level":"info","ts":"2024-04-16T17:21:33.762061Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2578,"took":"4.259818ms","hash":879522910,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1994752,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-04-16T17:21:33.762168Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":879522910,"revision":2578,"compact-revision":2041}
	
	
	==> kernel <==
	 17:25:07 up 30 min,  0 users,  load average: 0.21, 0.22, 0.19
	Linux ha-022600 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [be245de9ef54] <==
	I0416 17:24:01.988511       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:24:12.003675       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:24:12.004070       1 main.go:227] handling current node
	I0416 17:24:12.004132       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:24:12.004159       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:24:22.012425       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:24:22.013310       1 main.go:227] handling current node
	I0416 17:24:22.013364       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:24:22.013381       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:24:32.024707       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:24:32.025199       1 main.go:227] handling current node
	I0416 17:24:32.025358       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:24:32.025406       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:24:42.031914       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:24:42.032021       1 main.go:227] handling current node
	I0416 17:24:42.032034       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:24:42.032042       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:24:52.045626       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:24:52.045743       1 main.go:227] handling current node
	I0416 17:24:52.045757       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:24:52.046452       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	I0416 17:25:02.053088       1 main.go:223] Handling node with IPs: map[172.19.81.207:{}]
	I0416 17:25:02.053180       1 main.go:227] handling current node
	I0416 17:25:02.053193       1 main.go:223] Handling node with IPs: map[172.19.93.94:{}]
	I0416 17:25:02.053200       1 main.go:250] Node ha-022600-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [4fd5df8c9fd3] <==
	I0416 16:56:35.510308       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 16:56:35.512679       1 aggregator.go:165] initial CRD sync complete...
	I0416 16:56:35.516211       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 16:56:35.516249       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 16:56:35.516256       1 cache.go:39] Caches are synced for autoregister controller
	I0416 16:56:35.517473       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 16:56:35.522352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 16:56:35.529558       1 controller.go:624] quota admission added evaluator for: namespaces
	I0416 16:56:35.542494       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 16:56:36.411016       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0416 16:56:36.418409       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0416 16:56:36.419376       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 16:56:37.172553       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 16:56:37.235069       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 16:56:37.370838       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0416 16:56:37.381797       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.81.207]
	I0416 16:56:37.383264       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 16:56:37.388718       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 16:56:37.435733       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 16:56:38.737496       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 16:56:38.764389       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0416 16:56:38.781093       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 16:56:51.466047       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0416 16:56:51.868826       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	http2: server: error reading preface from client 172.19.93.94:54156: read tcp 172.19.95.254:8443->172.19.93.94:54156: read: connection reset by peer
	
	
	==> kube-controller-manager [a7fb69539df6] <==
	I0416 16:57:07.224903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="88.905µs"
	I0416 16:57:07.277301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="19.898845ms"
	I0416 16:57:07.277810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="53.303µs"
	I0416 17:00:45.709324       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 3"
	I0416 17:00:45.728545       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-rpfpf"
	I0416 17:00:45.745464       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-mnl84"
	I0416 17:00:45.756444       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-gph6r"
	I0416 17:00:45.770175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.082711ms"
	I0416 17:00:45.784213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.744211ms"
	I0416 17:00:45.810992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="26.530372ms"
	I0416 17:00:45.811146       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="42.802µs"
	I0416 17:00:48.413892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="8.465463ms"
	I0416 17:00:48.413981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.302µs"
	I0416 17:16:37.436480       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-022600-m03\" does not exist"
	I0416 17:16:37.446130       1 range_allocator.go:380] "Set node PodCIDR" node="ha-022600-m03" podCIDRs=["10.244.1.0/24"]
	I0416 17:16:37.459239       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7c2px"
	I0416 17:16:37.461522       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ss5lp"
	I0416 17:16:41.186805       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-022600-m03"
	I0416 17:16:41.187824       1 event.go:376] "Event occurred" object="ha-022600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-022600-m03 event: Registered Node ha-022600-m03 in Controller"
	I0416 17:16:54.835196       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-022600-m03"
	I0416 17:21:10.057845       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="1.643684ms"
	I0416 17:21:10.062178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="53.103µs"
	I0416 17:21:10.084166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="65.503µs"
	I0416 17:21:12.764100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="8.960957ms"
	I0416 17:21:12.764437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="234.912µs"
	
	
	==> kube-proxy [05db92f49e0d] <==
	I0416 16:56:54.468581       1 server_others.go:72] "Using iptables proxy"
	I0416 16:56:54.505964       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.81.207"]
	I0416 16:56:54.583838       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:56:54.584172       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:56:54.584273       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:56:54.590060       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:56:54.590806       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:56:54.591014       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:56:54.592331       1 config.go:188] "Starting service config controller"
	I0416 16:56:54.592517       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:56:54.592625       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:56:54.592689       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:56:54.594058       1 config.go:315] "Starting node config controller"
	I0416 16:56:54.594215       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 16:56:54.693900       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 16:56:54.693964       1 shared_informer.go:318] Caches are synced for service config
	I0416 16:56:54.694328       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e042d71e8b0e] <==
	W0416 16:56:36.501819       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 16:56:36.501922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 16:56:36.507709       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 16:56:36.507948       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 16:56:36.573671       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 16:56:36.573877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 16:56:36.602162       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 16:56:36.602205       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 16:56:36.621966       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.622272       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.648392       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 16:56:36.648623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 16:56:36.694872       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 16:56:36.694970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 16:56:36.804118       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.805424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.821863       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.822231       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 16:56:36.866017       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 16:56:36.866298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 16:56:36.904820       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 16:56:36.905097       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 16:56:36.917996       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 16:56:36.918036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0416 16:56:39.298679       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 17:20:38 ha-022600 kubelet[2220]: E0416 17:20:38.994897    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:20:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:20:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:20:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:20:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:21:39 ha-022600 kubelet[2220]: E0416 17:21:39.001981    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:21:39 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:21:39 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:21:39 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:21:39 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:22:38 ha-022600 kubelet[2220]: E0416 17:22:38.995322    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:22:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:22:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:22:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:22:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:23:38 ha-022600 kubelet[2220]: E0416 17:23:38.994530    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:23:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:23:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:23:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:23:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:24:38 ha-022600 kubelet[2220]: E0416 17:24:38.994154    2220 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:24:38 ha-022600 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:24:38 ha-022600 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:24:38 ha-022600 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:24:38 ha-022600 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0416 17:24:59.747395   12864 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-022600 -n ha-022600: (10.9717949s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-022600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7fdf7869d9-gph6r
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-022600 describe pod busybox-7fdf7869d9-gph6r
helpers_test.go:282: (dbg) kubectl --context ha-022600 describe pod busybox-7fdf7869d9-gph6r:

-- stdout --
	Name:             busybox-7fdf7869d9-gph6r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7fdf7869d9
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7fdf7869d9
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h29q5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-h29q5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  9m9s (x4 over 24m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  4m9s                default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (196.62s)

TestImageBuild/serial/Setup (210.44s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-318300 --driver=hyperv
E0416 17:29:10.128882    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p image-318300 --driver=hyperv: exit status 90 (3m19.2278265s)

-- stdout --
	* [image-318300] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "image-318300" primary control-plane node in "image-318300" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	W0416 17:27:44.910034   11600 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 17:29:33 image-318300 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 17:29:33 image-318300 dockerd[668]: time="2024-04-16T17:29:33.841719298Z" level=info msg="Starting up"
	Apr 16 17:29:33 image-318300 dockerd[668]: time="2024-04-16T17:29:33.842616908Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 17:29:33 image-318300 dockerd[668]: time="2024-04-16T17:29:33.843551427Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=674
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.872918794Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.908558928Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.908640447Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.908719065Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.908739570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.908828891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.909026937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.909245789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.909394623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.909416328Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.909427731Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.909610574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.909959155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.913325643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.913547995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.913707632Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.913789351Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.913928984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.914050312Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.914075018Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.922521993Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.922601412Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.922621316Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.922688832Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.922722840Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.922834366Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923311678Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923625851Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923728275Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923746579Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923761383Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923776086Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923789289Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923803693Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923819696Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923841402Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923855505Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923868008Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923888813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923903016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923915919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923930122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923942625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923956028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923967831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923981334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.923995337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.924010041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.924021944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.924035147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.924048550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.924064154Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.924084658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.924097161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.924108664Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.924176480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.924383428Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.924590377Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.925091294Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.925346153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.925528496Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.925546700Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.925842870Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.926097529Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.926149141Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 17:29:33 image-318300 dockerd[674]: time="2024-04-16T17:29:33.926179148Z" level=info msg="containerd successfully booted in 0.054849s"
	Apr 16 17:29:34 image-318300 dockerd[668]: time="2024-04-16T17:29:34.893830237Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 17:29:34 image-318300 dockerd[668]: time="2024-04-16T17:29:34.908225333Z" level=info msg="Loading containers: start."
	Apr 16 17:29:35 image-318300 dockerd[668]: time="2024-04-16T17:29:35.130659908Z" level=info msg="Loading containers: done."
	Apr 16 17:29:35 image-318300 dockerd[668]: time="2024-04-16T17:29:35.147265669Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 17:29:35 image-318300 dockerd[668]: time="2024-04-16T17:29:35.147496419Z" level=info msg="Daemon has completed initialization"
	Apr 16 17:29:35 image-318300 dockerd[668]: time="2024-04-16T17:29:35.212243805Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 17:29:35 image-318300 dockerd[668]: time="2024-04-16T17:29:35.212629288Z" level=info msg="API listen on [::]:2376"
	Apr 16 17:29:35 image-318300 systemd[1]: Started Docker Application Container Engine.
	Apr 16 17:30:02 image-318300 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 17:30:02 image-318300 dockerd[668]: time="2024-04-16T17:30:02.987154254Z" level=info msg="Processing signal 'terminated'"
	Apr 16 17:30:02 image-318300 dockerd[668]: time="2024-04-16T17:30:02.989273757Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 17:30:02 image-318300 dockerd[668]: time="2024-04-16T17:30:02.990429113Z" level=info msg="Daemon shutdown complete"
	Apr 16 17:30:02 image-318300 dockerd[668]: time="2024-04-16T17:30:02.990736028Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 17:30:02 image-318300 dockerd[668]: time="2024-04-16T17:30:02.990900236Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 17:30:03 image-318300 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 17:30:03 image-318300 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 17:30:04 image-318300 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 17:30:04 image-318300 dockerd[1025]: time="2024-04-16T17:30:04.067890780Z" level=info msg="Starting up"
	Apr 16 17:31:04 image-318300 dockerd[1025]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 17:31:04 image-318300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 17:31:04 image-318300 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 17:31:04 image-318300 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p image-318300 --driver=hyperv" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p image-318300 -n image-318300
E0416 17:31:06.920617    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p image-318300 -n image-318300: exit status 6 (11.2108858s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0416 17:31:04.122359    1844 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0416 17:31:15.168890    1844 status.go:417] kubeconfig endpoint: get endpoint: "image-318300" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "image-318300" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestImageBuild/serial/Setup (210.44s)

TestMountStart/serial/RestartStopped (176.96s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-738600
E0416 17:51:06.986373    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p mount-start-2-738600: exit status 90 (2m45.8412519s)

-- stdout --
	* [mount-start-2-738600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting minikube without Kubernetes in cluster mount-start-2-738600
	* Restarting existing hyperv VM for "mount-start-2-738600" ...
	
	

-- /stdout --
** stderr ** 
	W0416 17:50:38.409924   10284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 17:51:58 mount-start-2-738600 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 17:51:58 mount-start-2-738600 dockerd[648]: time="2024-04-16T17:51:58.124062436Z" level=info msg="Starting up"
	Apr 16 17:51:58 mount-start-2-738600 dockerd[648]: time="2024-04-16T17:51:58.125191017Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 17:51:58 mount-start-2-738600 dockerd[648]: time="2024-04-16T17:51:58.126291392Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=654
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.159597897Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.186382875Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.186513808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.186584226Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.186600830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.187254393Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.187433237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.187884150Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.188049991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.188085800Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.188106005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.188600929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.189351516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.192555515Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.192657240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.192829183Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.192916805Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.193447237Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.193648287Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.193665791Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.195673492Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.195743810Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.195764115Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.195780119Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.195795723Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.195864940Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196296547Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196532506Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196555312Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196568815Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196582619Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196595422Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196607025Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196620328Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196633732Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196646835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196658538Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196669240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196696247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196742459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196768265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196780868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196792171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196803874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196814577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196826380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196838983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196852886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196864089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196875292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196886395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196900798Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196919703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196931106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.196952311Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.197012426Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.197224779Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.197243684Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.197256787Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.197321003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.197403824Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.197425029Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.197805824Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.197869540Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.198100297Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 17:51:58 mount-start-2-738600 dockerd[654]: time="2024-04-16T17:51:58.198211825Z" level=info msg="containerd successfully booted in 0.041156s"
	Apr 16 17:51:59 mount-start-2-738600 dockerd[648]: time="2024-04-16T17:51:59.164679722Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 17:51:59 mount-start-2-738600 dockerd[648]: time="2024-04-16T17:51:59.182942355Z" level=info msg="Loading containers: start."
	Apr 16 17:51:59 mount-start-2-738600 dockerd[648]: time="2024-04-16T17:51:59.392126211Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 16 17:51:59 mount-start-2-738600 dockerd[648]: time="2024-04-16T17:51:59.455309261Z" level=info msg="Loading containers: done."
	Apr 16 17:51:59 mount-start-2-738600 dockerd[648]: time="2024-04-16T17:51:59.470633535Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 17:51:59 mount-start-2-738600 dockerd[648]: time="2024-04-16T17:51:59.471903840Z" level=info msg="Daemon has completed initialization"
	Apr 16 17:51:59 mount-start-2-738600 dockerd[648]: time="2024-04-16T17:51:59.511620763Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 17:51:59 mount-start-2-738600 dockerd[648]: time="2024-04-16T17:51:59.511704683Z" level=info msg="API listen on [::]:2376"
	Apr 16 17:51:59 mount-start-2-738600 systemd[1]: Started Docker Application Container Engine.
	Apr 16 17:52:23 mount-start-2-738600 dockerd[648]: time="2024-04-16T17:52:23.151681061Z" level=info msg="Processing signal 'terminated'"
	Apr 16 17:52:23 mount-start-2-738600 dockerd[648]: time="2024-04-16T17:52:23.153204350Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 17:52:23 mount-start-2-738600 dockerd[648]: time="2024-04-16T17:52:23.153363060Z" level=info msg="Daemon shutdown complete"
	Apr 16 17:52:23 mount-start-2-738600 dockerd[648]: time="2024-04-16T17:52:23.153449765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 17:52:23 mount-start-2-738600 dockerd[648]: time="2024-04-16T17:52:23.153461565Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 17:52:23 mount-start-2-738600 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 17:52:24 mount-start-2-738600 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 17:52:24 mount-start-2-738600 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 17:52:24 mount-start-2-738600 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 17:52:24 mount-start-2-738600 dockerd[1024]: time="2024-04-16T17:52:24.223138806Z" level=info msg="Starting up"
	Apr 16 17:53:24 mount-start-2-738600 dockerd[1024]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 17:53:24 mount-start-2-738600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 17:53:24 mount-start-2-738600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 17:53:24 mount-start-2-738600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:168: restart failed: "out/minikube-windows-amd64.exe start -p mount-start-2-738600" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-738600 -n mount-start-2-738600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-738600 -n mount-start-2-738600: exit status 6 (11.1194739s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0416 17:53:24.266249    4224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0416 17:53:35.227165    4224 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-738600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-738600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/RestartStopped (176.96s)

TestMultiNode/serial/PingHostFrom2Pods (52.5s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- exec busybox-7fdf7869d9-jxvx2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- exec busybox-7fdf7869d9-jxvx2 -- sh -c "ping -c 1 172.19.80.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- exec busybox-7fdf7869d9-jxvx2 -- sh -c "ping -c 1 172.19.80.1": exit status 1 (10.4248291s)

-- stdout --
	PING 172.19.80.1 (172.19.80.1): 56 data bytes
	
	--- 172.19.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0416 18:01:13.151867    8672 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.19.80.1) from pod (busybox-7fdf7869d9-jxvx2): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- exec busybox-7fdf7869d9-ns8nx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- exec busybox-7fdf7869d9-ns8nx -- sh -c "ping -c 1 172.19.80.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- exec busybox-7fdf7869d9-ns8nx -- sh -c "ping -c 1 172.19.80.1": exit status 1 (10.4324246s)

-- stdout --
	PING 172.19.80.1 (172.19.80.1): 56 data bytes
	
	--- 172.19.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0416 18:01:24.024238    4836 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.19.80.1) from pod (busybox-7fdf7869d9-ns8nx): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-945500 -n multinode-945500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-945500 -n multinode-945500: (10.8467645s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-945500 logs -n 25: (7.5302199s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -p mount-start-2-738600                           | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:47 UTC | 16 Apr 24 17:49 UTC |
	|         | --memory=2048 --mount                             |                      |                   |                |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |                   |                |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |                   |                |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |                   |                |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |                |                     |                     |
	| mount   | C:\Users\jenkins.minikube5:/minikube-host         | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:49 UTC |                     |
	|         | --profile mount-start-2-738600 --v 0              |                      |                   |                |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |                |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |                |                     |                     |
	|         |                                                 0 |                      |                   |                |                     |                     |
	| ssh     | mount-start-2-738600 ssh -- ls                    | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:49 UTC | 16 Apr 24 17:49 UTC |
	|         | /minikube-host                                    |                      |                   |                |                     |                     |
	| delete  | -p mount-start-1-738600                           | mount-start-1-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:49 UTC | 16 Apr 24 17:50 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |                |                     |                     |
	| ssh     | mount-start-2-738600 ssh -- ls                    | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:50 UTC | 16 Apr 24 17:50 UTC |
	|         | /minikube-host                                    |                      |                   |                |                     |                     |
	| stop    | -p mount-start-2-738600                           | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:50 UTC | 16 Apr 24 17:50 UTC |
	| start   | -p mount-start-2-738600                           | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:50 UTC |                     |
	| delete  | -p mount-start-2-738600                           | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:53 UTC | 16 Apr 24 17:54 UTC |
	| delete  | -p mount-start-1-738600                           | mount-start-1-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:54 UTC | 16 Apr 24 17:54 UTC |
	| start   | -p multinode-945500                               | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:54 UTC | 16 Apr 24 18:00 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |                |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |                |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |                |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- apply -f                   | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- rollout                    | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | status deployment/busybox                         |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- get pods -o                | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- get pods -o                | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2 --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2 --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2 -- nslookup              |                      |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx -- nslookup              |                      |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- get pods -o                | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2                          |                      |                   |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | busybox-7fdf7869d9-jxvx2 -- sh                    |                      |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.80.1                          |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx                          |                      |                   |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | busybox-7fdf7869d9-ns8nx -- sh                    |                      |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.80.1                          |                      |                   |                |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:54:38
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
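The `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg` header above is the standard glog/klog line format used by the entries that follow. A minimal parser sketch for it (the field names are my own, not minikube's):

```python
import re

# One glog/klog line: severity, month+day, time, thread id, source location, message.
GLOG_RE = re.compile(
    r"^(?P<level>[IWEF])"                       # I/W/E/F severity
    r"(?P<month>\d{2})(?P<day>\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})"
    r"\s+(?P<thread>\d+) "
    r"(?P<file>[^:]+):(?P<line>\d+)\] "
    r"(?P<msg>.*)$"
)

def parse_glog(line: str) -> dict:
    m = GLOG_RE.match(line)
    if not m:
        raise ValueError("not a glog-formatted line")
    return m.groupdict()

rec = parse_glog("I0416 17:54:38.458993    6988 out.go:291] Setting OutFile to fd 960 ...")
print(rec["level"], rec["file"], rec["line"])  # -> I out.go 291
```

Note the format omits the year, so timestamps must be combined with the "Log file created at:" header to be unambiguous.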
	I0416 17:54:38.458993    6988 out.go:291] Setting OutFile to fd 960 ...
	I0416 17:54:38.459581    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:54:38.459581    6988 out.go:304] Setting ErrFile to fd 676...
	I0416 17:54:38.459678    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:54:38.483191    6988 out.go:298] Setting JSON to false
	I0416 17:54:38.487192    6988 start.go:129] hostinfo: {"hostname":"minikube5","uptime":27708,"bootTime":1713262370,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 17:54:38.487192    6988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 17:54:38.488186    6988 out.go:177] * [multinode-945500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 17:54:38.489188    6988 notify.go:220] Checking for updates...
	I0416 17:54:38.489188    6988 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:54:38.490185    6988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:54:38.490185    6988 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 17:54:38.491184    6988 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:54:38.491184    6988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:54:38.493214    6988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:54:43.355603    6988 out.go:177] * Using the hyperv driver based on user configuration
	I0416 17:54:43.356197    6988 start.go:297] selected driver: hyperv
	I0416 17:54:43.356197    6988 start.go:901] validating driver "hyperv" against <nil>
	I0416 17:54:43.356273    6988 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:54:43.396166    6988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 17:54:43.397176    6988 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:54:43.397504    6988 cni.go:84] Creating CNI manager for ""
	I0416 17:54:43.397537    6988 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0416 17:54:43.397537    6988 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0416 17:54:43.397711    6988 start.go:340] cluster config:
	{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:54:43.397711    6988 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:54:43.399183    6988 out.go:177] * Starting "multinode-945500" primary control-plane node in "multinode-945500" cluster
	I0416 17:54:43.399538    6988 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 17:54:43.399538    6988 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 17:54:43.399538    6988 cache.go:56] Caching tarball of preloaded images
	I0416 17:54:43.399538    6988 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 17:54:43.400205    6988 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 17:54:43.400795    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:54:43.401059    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json: {Name:mk67f15eab35e69a3277eb33417238e6d320045f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:54:43.401506    6988 start.go:360] acquireMachinesLock for multinode-945500: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:54:43.402049    6988 start.go:364] duration metric: took 542.9µs to acquireMachinesLock for "multinode-945500"
	I0416 17:54:43.402113    6988 start.go:93] Provisioning new machine with config: &{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 17:54:43.402113    6988 start.go:125] createHost starting for "" (driver="hyperv")
	I0416 17:54:43.403221    6988 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 17:54:43.403542    6988 start.go:159] libmachine.API.Create for "multinode-945500" (driver="hyperv")
	I0416 17:54:43.403595    6988 client.go:168] LocalClient.Create starting
	I0416 17:54:43.404086    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Decoding PEM data...
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Parsing certificate...
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Decoding PEM data...
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Parsing certificate...
	I0416 17:54:43.404276    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 17:54:45.288246    6988 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 17:54:45.288342    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:45.288493    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 17:54:46.922912    6988 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 17:54:46.922912    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:46.923010    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 17:54:48.270889    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 17:54:48.270889    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:48.270889    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 17:54:51.466825    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 17:54:51.466825    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:51.468671    6988 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 17:54:51.806641    6988 main.go:141] libmachine: Creating SSH key...
	I0416 17:54:52.035351    6988 main.go:141] libmachine: Creating VM...
	I0416 17:54:52.036345    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 17:54:54.656446    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 17:54:54.656494    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:54.656633    6988 main.go:141] libmachine: Using switch "Default Switch"
	I0416 17:54:54.656633    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 17:54:56.229378    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 17:54:56.229607    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:56.229607    6988 main.go:141] libmachine: Creating VHD
	I0416 17:54:56.229607    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 17:54:59.733727    6988 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5A486C23-0EFD-43D1-8BEB-4A60ACE1DF98
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 17:54:59.733800    6988 main.go:141] libmachine: [stderr =====>] : 
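A side note on the `New-VHD` output above: `FileSize` (10486272) exceeds `Size` (10485760) by exactly 512 bytes, because a fixed-format VHD stores the raw disk data followed by a 512-byte footer. The arithmetic:

```python
# Fixed-format VHD: on-disk file = virtual disk size + 512-byte footer.
virtual_size = 10 * 1024 * 1024   # New-VHD -SizeBytes 10MB
VHD_FOOTER_BYTES = 512
file_size = virtual_size + VHD_FOOTER_BYTES

print(virtual_size, file_size)   # matches Size / FileSize in the log above
```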
	I0416 17:54:59.733873    6988 main.go:141] libmachine: Writing magic tar header
	I0416 17:54:59.733915    6988 main.go:141] libmachine: Writing SSH key tar header
	I0416 17:54:59.741031    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 17:55:02.758991    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:02.758991    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:02.759271    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\disk.vhd' -SizeBytes 20000MB
	I0416 17:55:05.056217    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:05.056217    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:05.057316    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-945500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 17:55:08.311574    6988 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-945500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 17:55:08.311574    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:08.311863    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-945500 -DynamicMemoryEnabled $false
	I0416 17:55:10.388584    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:10.389586    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:10.389586    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-945500 -Count 2
	I0416 17:55:12.413711    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:12.413711    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:12.414332    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-945500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\boot2docker.iso'
	I0416 17:55:14.741711    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:14.741711    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:14.741711    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-945500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\disk.vhd'
	I0416 17:55:17.003645    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:17.003645    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:17.003645    6988 main.go:141] libmachine: Starting VM...
	I0416 17:55:17.003645    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-945500
	I0416 17:55:19.573472    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:19.573700    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:19.573700    6988 main.go:141] libmachine: Waiting for host to start...
	I0416 17:55:19.573790    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:21.624051    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:21.624051    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:21.624771    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:23.884692    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:23.884692    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:24.892318    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:26.899190    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:26.899190    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:26.899348    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:29.176655    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:29.176655    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:30.177215    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:32.143102    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:32.143102    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:32.143464    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:34.404986    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:34.405261    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:35.419315    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:37.438553    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:37.438958    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:37.438958    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:39.692795    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:39.692795    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:40.700997    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:42.744138    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:42.744982    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:42.745064    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:45.083348    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:55:45.083348    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:45.083448    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:47.049900    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:47.050444    6988 main.go:141] libmachine: [stderr =====>] : 
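The "Waiting for host to start" phase above polls `( Hyper-V\Get-VM ).state` and the adapter's `ipaddresses[0]` roughly once per second until an address appears. A minimal sketch of that retry loop, with a stub standing in for the Hyper-V query (`get_ip` is hypothetical, not minikube code):

```shell
#!/bin/sh
# Sketch of the "Waiting for host to start" loop in the log above:
# poll until an IP is reported, pausing between attempts. get_ip is a
# stub standing in for the Get-VM networkadapters/ipaddresses query.
attempt=0
get_ip() {
  # Stub: reports nothing for the first two polls, then an address,
  # mimicking a VM whose adapter has not yet acquired a lease.
  [ "$attempt" -ge 3 ] && echo "172.19.91.227"
}
while :; do
  attempt=$((attempt + 1))
  ip=$(get_ip)
  [ -n "$ip" ] && break
  sleep 0   # the real loop waits ~1s between polls
done
echo "got $ip after $attempt polls"
```

The log shows the same shape: several empty `[stdout =====>]` lines for the IP query, then `172.19.91.227` once the guest's adapter is up.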
	I0416 17:55:47.050523    6988 machine.go:94] provisionDockerMachine start ...
	I0416 17:55:47.050566    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:49.000414    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:49.000414    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:49.000537    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:51.284377    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:55:51.285296    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:51.290721    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:55:51.303784    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:55:51.303784    6988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:55:51.430251    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 17:55:51.430320    6988 buildroot.go:166] provisioning hostname "multinode-945500"
	I0416 17:55:51.430320    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:53.414239    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:53.414239    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:53.414512    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:55.729573    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:55:55.729573    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:55.733714    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:55:55.734245    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:55:55.734245    6988 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-945500 && echo "multinode-945500" | sudo tee /etc/hostname
	I0416 17:55:55.888906    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-945500
	
	I0416 17:55:55.888975    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:57.782302    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:57.782302    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:57.782786    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:00.073834    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:00.073834    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:00.078560    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:00.078657    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:00.078657    6988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-945500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-945500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-945500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:56:00.230030    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
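The `/etc/hosts` snippet minikube runs over SSH above is idempotent: it only rewrites or appends the `127.0.1.1` line when the hostname is missing. The same logic can be exercised standalone against a temp file instead of the real `/etc/hosts` (GNU grep/sed assumed for `\s` and `-i`):

```shell
#!/bin/sh
# Sketch of the idempotent /etc/hosts hostname update from the log,
# run against a temp copy rather than the real /etc/hosts.
# Assumes GNU grep (\s) and GNU sed (-i).
HOSTS=$(mktemp)
NAME=multinode-945500
printf '127.0.0.1 localhost\n' > "$HOSTS"

ensure_hostname() {
  # Only touch the file when the name is absent; rewrite an existing
  # 127.0.1.1 line if present, otherwise append one.
  if ! grep -q "\s$NAME\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1\s' "$HOSTS"; then
      sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
    else
      echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
  fi
}

ensure_hostname
ensure_hostname   # second run is a no-op
grep -c "$NAME" "$HOSTS"
```

Running it twice leaves exactly one `127.0.1.1 multinode-945500` entry, which is why the SSH command can be re-run safely on every provision.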
	I0416 17:56:00.230079    6988 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 17:56:00.230079    6988 buildroot.go:174] setting up certificates
	I0416 17:56:00.230079    6988 provision.go:84] configureAuth start
	I0416 17:56:00.230182    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:02.147449    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:02.147449    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:02.147449    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:04.449327    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:04.450388    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:04.450388    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:06.443860    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:06.443860    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:06.444760    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:08.814817    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:08.814817    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:08.814817    6988 provision.go:143] copyHostCerts
	I0416 17:56:08.815787    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 17:56:08.816004    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 17:56:08.816004    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 17:56:08.816371    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 17:56:08.817376    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 17:56:08.817582    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 17:56:08.817582    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 17:56:08.817582    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 17:56:08.818480    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 17:56:08.818480    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 17:56:08.818480    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 17:56:08.819278    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 17:56:08.820184    6988 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-945500 san=[127.0.0.1 172.19.91.227 localhost minikube multinode-945500]
	I0416 17:56:09.120922    6988 provision.go:177] copyRemoteCerts
	I0416 17:56:09.129891    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:56:09.129891    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:11.105788    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:11.105788    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:11.105788    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:13.452243    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:13.452243    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:13.452604    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:56:13.553822    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.42368s)
	I0416 17:56:13.553822    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 17:56:13.553822    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 17:56:13.595187    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 17:56:13.595187    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0416 17:56:13.635052    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 17:56:13.635528    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 17:56:13.675952    6988 provision.go:87] duration metric: took 13.4440865s to configureAuth
	I0416 17:56:13.676049    6988 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:56:13.676421    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:56:13.676504    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:15.610838    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:15.610926    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:15.610926    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:17.912484    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:17.913491    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:17.916946    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:17.917531    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:17.917531    6988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 17:56:18.061063    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 17:56:18.061063    6988 buildroot.go:70] root file system type: tmpfs
	I0416 17:56:18.061690    6988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 17:56:18.061690    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:20.049603    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:20.049603    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:20.049978    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:22.383521    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:22.383521    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:22.387896    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:22.388601    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:22.388601    6988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 17:56:22.561164    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 17:56:22.561269    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:24.443674    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:24.444091    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:24.444193    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:26.758959    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:26.758959    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:26.765429    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:26.765429    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:26.765957    6988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 17:56:28.704221    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 17:56:28.704221    6988 machine.go:97] duration metric: took 41.6513356s to provisionDockerMachine
	I0416 17:56:28.704317    6988 client.go:171] duration metric: took 1m45.2947032s to LocalClient.Create
	I0416 17:56:28.704398    6988 start.go:167] duration metric: took 1m45.2948041s to libmachine.API.Create "multinode-945500"
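The unit install above uses a compare-then-swap idiom: write `docker.service.new`, `diff` it against the installed unit, and only on a difference (or, as in this first boot, a missing file — note the `can't stat` diff output) move it into place and restart the service. A sketch of that idiom with temp files instead of `/lib/systemd/system` (paths hypothetical; the `systemctl` steps are elided):

```shell
#!/bin/sh
# Sketch of the "write .new, swap only if changed" unit update from the
# log above, against temp files. The real script follows the mv with
# daemon-reload / enable / restart; that part is omitted here.
UNIT=$(mktemp); NEW=${UNIT}.new
rm -f "$UNIT"                        # simulate first boot: no unit yet
printf '[Unit]\nDescription=demo\n' > "$NEW"

if ! diff -u "$UNIT" "$NEW" >/dev/null 2>&1; then
  # Files differ (or the old unit is missing): install the new one.
  mv "$NEW" "$UNIT"
  echo "unit updated"
else
  rm -f "$NEW"
  echo "unit unchanged"
fi
```

Because the swap and restart only happen on a real change, repeated provisioning runs do not needlessly bounce dockerd.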
	I0416 17:56:28.704398    6988 start.go:293] postStartSetup for "multinode-945500" (driver="hyperv")
	I0416 17:56:28.704489    6988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:56:28.714148    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:56:28.714148    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:30.638973    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:30.638973    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:30.639089    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:32.961564    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:32.961564    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:32.961564    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:56:33.069322    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3549265s)
	I0416 17:56:33.078710    6988 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:56:33.085331    6988 command_runner.go:130] > NAME=Buildroot
	I0416 17:56:33.085331    6988 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 17:56:33.085331    6988 command_runner.go:130] > ID=buildroot
	I0416 17:56:33.085331    6988 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 17:56:33.085331    6988 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 17:56:33.086070    6988 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:56:33.086171    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 17:56:33.086945    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 17:56:33.088129    6988 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 17:56:33.088129    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 17:56:33.106615    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:56:33.129263    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 17:56:33.174677    6988 start.go:296] duration metric: took 4.469934s for postStartSetup
	I0416 17:56:33.177364    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:35.133709    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:35.133709    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:35.133796    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:37.452577    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:37.452577    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:37.453529    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:56:37.455914    6988 start.go:128] duration metric: took 1m54.0472303s to createHost
	I0416 17:56:37.455914    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:39.425449    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:39.425449    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:39.426011    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:41.744115    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:41.744115    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:41.748497    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:41.748631    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:41.748631    6988 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 17:56:41.875115    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713290202.039643702
	
	I0416 17:56:41.875272    6988 fix.go:216] guest clock: 1713290202.039643702
	I0416 17:56:41.875272    6988 fix.go:229] Guest: 2024-04-16 17:56:42.039643702 +0000 UTC Remote: 2024-04-16 17:56:37.4559145 +0000 UTC m=+119.121500601 (delta=4.583729202s)
	I0416 17:56:41.875399    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:43.872191    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:43.873117    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:43.873117    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:46.207797    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:46.207797    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:46.213575    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:46.213575    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:46.213575    6988 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713290201
	I0416 17:56:46.370971    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 17:56:41 UTC 2024
	
	I0416 17:56:46.370971    6988 fix.go:236] clock set: Tue Apr 16 17:56:41 UTC 2024
	 (err=<nil>)
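The clock-fix step above reads the guest's epoch over SSH (`date +%s.%N`, the printf verbs mangled in the log), computes the host/guest delta, and pushes the host's epoch back with `sudo date -s @<epoch>`. A sketch of that arithmetic with illustrative values (not the ones from this log):

```shell
#!/bin/sh
# Sketch of the guest clock sync from the log: compare guest and host
# epochs, then emit the date command minikube would run over SSH.
# The epoch values below are illustrative, not taken from the log.
guest=1713290202
host=1713290197
delta=$((guest - host))
echo "delta=${delta}s"
echo "sudo date -s @${host}"   # push the host's clock into the guest
```

In the run above the measured skew was about 4.6s, past minikube's tolerance, so it set the guest clock before releasing the machines lock.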
	I0416 17:56:46.371058    6988 start.go:83] releasing machines lock for "multinode-945500", held for 2m2.9620339s
	I0416 17:56:46.371284    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:48.308157    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:48.308984    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:48.309041    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:50.575031    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:50.575031    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:50.579218    6988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:56:50.579218    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:50.586441    6988 ssh_runner.go:195] Run: cat /version.json
	I0416 17:56:50.586979    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:52.634472    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:52.634472    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:52.634472    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:52.639621    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:52.639621    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:52.639621    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:55.047917    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:55.048488    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:55.048917    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:56:55.065759    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:55.066462    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:55.066602    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:56:55.354145    6988 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 17:56:55.354145    6988 command_runner.go:130] > {"iso_version": "v1.33.0-1713236417-18649", "kicbase_version": "v0.0.43-1713215244-18647", "minikube_version": "v1.33.0-beta.0", "commit": "4ec1a3e88a9f3ffb3930e555284d907468ae83a6"}
	I0416 17:56:55.354145    6988 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7746557s)
	I0416 17:56:55.354145    6988 ssh_runner.go:235] Completed: cat /version.json: (4.7668953s)
	I0416 17:56:55.366453    6988 ssh_runner.go:195] Run: systemctl --version
	I0416 17:56:55.375220    6988 command_runner.go:130] > systemd 252 (252)
	I0416 17:56:55.375220    6988 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0416 17:56:55.384285    6988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 17:56:55.392020    6988 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0416 17:56:55.392567    6988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:56:55.401209    6988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:56:55.426637    6988 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0416 17:56:55.427403    6988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 17:56:55.427503    6988 start.go:494] detecting cgroup driver to use...
	I0416 17:56:55.427534    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:56:55.457110    6988 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0416 17:56:55.470104    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 17:56:55.494070    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 17:56:55.511268    6988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 17:56:55.523954    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 17:56:55.549161    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 17:56:55.576216    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 17:56:55.602400    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 17:56:55.630572    6988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:56:55.656816    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 17:56:55.683825    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 17:56:55.710767    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
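	The run of `sed` commands above rewrites `/etc/containerd/config.toml`: pin the sandbox image to `pause:3.9`, force `SystemdCgroup = false` (cgroupfs), and migrate the v1 runc shim to `io.containerd.runc.v2`. A sketch applying the same substitutions to a scratch copy (the sample config content is a minimal stand-in, not the VM's real file):

```shell
# Scratch stand-in for /etc/containerd/config.toml (the real edits run via sudo on the VM).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runtime.v1.linux"
  SystemdCgroup = true
EOF

# The same substitutions the log runs, in order:
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$cfg"

cat "$cfg"
```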
	I0416 17:56:55.737864    6988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:56:55.753678    6988 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 17:56:55.761926    6988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:56:55.794919    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:56:55.964839    6988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 17:56:55.993258    6988 start.go:494] detecting cgroup driver to use...
	I0416 17:56:56.002807    6988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 17:56:56.020460    6988 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0416 17:56:56.020914    6988 command_runner.go:130] > [Unit]
	I0416 17:56:56.020998    6988 command_runner.go:130] > Description=Docker Application Container Engine
	I0416 17:56:56.020998    6988 command_runner.go:130] > Documentation=https://docs.docker.com
	I0416 17:56:56.020998    6988 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0416 17:56:56.020998    6988 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0416 17:56:56.021071    6988 command_runner.go:130] > StartLimitBurst=3
	I0416 17:56:56.021071    6988 command_runner.go:130] > StartLimitIntervalSec=60
	I0416 17:56:56.021071    6988 command_runner.go:130] > [Service]
	I0416 17:56:56.021071    6988 command_runner.go:130] > Type=notify
	I0416 17:56:56.021071    6988 command_runner.go:130] > Restart=on-failure
	I0416 17:56:56.021071    6988 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0416 17:56:56.021156    6988 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0416 17:56:56.021156    6988 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0416 17:56:56.021156    6988 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0416 17:56:56.021241    6988 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0416 17:56:56.021281    6988 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0416 17:56:56.021354    6988 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0416 17:56:56.021427    6988 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0416 17:56:56.021427    6988 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0416 17:56:56.021427    6988 command_runner.go:130] > ExecStart=
	I0416 17:56:56.021508    6988 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0416 17:56:56.021508    6988 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0416 17:56:56.021586    6988 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0416 17:56:56.021586    6988 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0416 17:56:56.021663    6988 command_runner.go:130] > LimitNOFILE=infinity
	I0416 17:56:56.021663    6988 command_runner.go:130] > LimitNPROC=infinity
	I0416 17:56:56.021663    6988 command_runner.go:130] > LimitCORE=infinity
	I0416 17:56:56.021663    6988 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0416 17:56:56.021663    6988 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0416 17:56:56.021738    6988 command_runner.go:130] > TasksMax=infinity
	I0416 17:56:56.021738    6988 command_runner.go:130] > TimeoutStartSec=0
	I0416 17:56:56.021738    6988 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0416 17:56:56.021738    6988 command_runner.go:130] > Delegate=yes
	I0416 17:56:56.021738    6988 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0416 17:56:56.021811    6988 command_runner.go:130] > KillMode=process
	I0416 17:56:56.021811    6988 command_runner.go:130] > [Install]
	I0416 17:56:56.021811    6988 command_runner.go:130] > WantedBy=multi-user.target
	I0416 17:56:56.032694    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:56:56.060059    6988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:56:56.101716    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:56:56.131287    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 17:56:56.163190    6988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 17:56:56.210983    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 17:56:56.231971    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:56:56.261397    6988 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0416 17:56:56.272666    6988 ssh_runner.go:195] Run: which cri-dockerd
	I0416 17:56:56.276995    6988 command_runner.go:130] > /usr/bin/cri-dockerd
	I0416 17:56:56.286591    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 17:56:56.299870    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 17:56:56.337571    6988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 17:56:56.500406    6988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 17:56:56.646617    6988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 17:56:56.646617    6988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
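	At this point minikube ships a 130-byte `/etc/docker/daemon.json` to switch Docker to the cgroupfs driver. The payload itself is not printed in the log, so the content below is an assumption based on the logged step, not the actual file:

```shell
# Hypothetical reconstruction of the daemon.json minikube copies over;
# the actual 130-byte payload is not shown in the log.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file"
}
EOF
```

The subsequent `daemon-reload` / `restart docker` lines pick this file up.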
	I0416 17:56:56.690996    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:56:56.871261    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:56:59.295937    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4242935s)
	I0416 17:56:59.304599    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 17:56:59.333610    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 17:56:59.361657    6988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 17:56:59.541548    6988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 17:56:59.705672    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:56:59.866404    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 17:56:59.907640    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 17:56:59.939748    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:00.107406    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 17:57:00.200852    6988 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 17:57:00.212214    6988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 17:57:00.220777    6988 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0416 17:57:00.220777    6988 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0416 17:57:00.220777    6988 command_runner.go:130] > Device: 0,22	Inode: 885         Links: 1
	I0416 17:57:00.220777    6988 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0416 17:57:00.220777    6988 command_runner.go:130] > Access: 2024-04-16 17:57:00.296362377 +0000
	I0416 17:57:00.220777    6988 command_runner.go:130] > Modify: 2024-04-16 17:57:00.296362377 +0000
	I0416 17:57:00.220777    6988 command_runner.go:130] > Change: 2024-04-16 17:57:00.300362562 +0000
	I0416 17:57:00.220777    6988 command_runner.go:130] >  Birth: -
	I0416 17:57:00.220777    6988 start.go:562] Will wait 60s for crictl version
	I0416 17:57:00.230775    6988 ssh_runner.go:195] Run: which crictl
	I0416 17:57:00.235786    6988 command_runner.go:130] > /usr/bin/crictl
	I0416 17:57:00.245023    6988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:57:00.292622    6988 command_runner.go:130] > Version:  0.1.0
	I0416 17:57:00.292622    6988 command_runner.go:130] > RuntimeName:  docker
	I0416 17:57:00.292622    6988 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0416 17:57:00.292739    6988 command_runner.go:130] > RuntimeApiVersion:  v1
	I0416 17:57:00.292794    6988 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 17:57:00.301388    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 17:57:00.331067    6988 command_runner.go:130] > 26.0.1
	I0416 17:57:00.337439    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 17:57:00.365025    6988 command_runner.go:130] > 26.0.1
	I0416 17:57:00.367212    6988 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 17:57:00.367413    6988 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 17:57:00.371515    6988 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 17:57:00.371597    6988 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 17:57:00.371597    6988 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 17:57:00.371597    6988 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 17:57:00.374158    6988 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 17:57:00.374158    6988 ip.go:210] interface addr: 172.19.80.1/20
	I0416 17:57:00.380883    6988 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 17:57:00.386921    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
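	The `/etc/hosts` update above is an idempotent replace: drop any stale `host.minikube.internal` line, append a fresh mapping for the interface address found earlier (`172.19.80.1`), and copy the result back. The same trick against a scratch hosts file (the stale address is an example):

```shell
# Scratch hosts file with a stale mapping.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.19.80.2\thost.minikube.internal\n' > "$hosts"

ip=172.19.80.1   # interface address found in the log
{ grep -v $'\thost.minikube.internal$' "$hosts"
  printf '%s\thost.minikube.internal\n' "$ip"; } > "$hosts.new"
mv "$hosts.new" "$hosts"   # the real run does `sudo cp` back to /etc/hosts

cat "$hosts"
```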
	I0416 17:57:00.407839    6988 kubeadm.go:877] updating cluster {Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 17:57:00.407839    6988 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 17:57:00.416191    6988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 17:57:00.437198    6988 docker.go:685] Got preloaded images: 
	I0416 17:57:00.437198    6988 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0416 17:57:00.446472    6988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 17:57:00.461564    6988 command_runner.go:139] > {"Repositories":{}}
	I0416 17:57:00.472373    6988 ssh_runner.go:195] Run: which lz4
	I0416 17:57:00.477412    6988 command_runner.go:130] > /usr/bin/lz4
	I0416 17:57:00.477412    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0416 17:57:00.487276    6988 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 17:57:00.492861    6988 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 17:57:00.493543    6988 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 17:57:00.493600    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0416 17:57:01.970587    6988 docker.go:649] duration metric: took 1.4924844s to copy over tarball
	I0416 17:57:01.979028    6988 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 17:57:10.810575    6988 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.831045s)
	I0416 17:57:10.810689    6988 ssh_runner.go:146] rm: /preloaded.tar.lz4
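	For scale: the preload transfer above copies 367,996,162 bytes over SSH, and the `tar --xattrs -I lz4` unpack completes in 8.831045 s. A quick throughput check on those figures (both copied from the log):

```shell
# Figures copied from the log above.
bytes=367996162   # preloaded-images tarball size
secs=8.831045     # tar --xattrs -I lz4 extraction time
rate=$(awk -v b="$bytes" -v s="$secs" 'BEGIN { printf "%.1f", b / s / 1048576 }')
echo "extraction throughput: ${rate} MiB/s"
```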
	I0416 17:57:10.875450    6988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 17:57:10.895935    6988 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.29.3":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.29.3":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.29.3":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b
5bbe4f71784e392"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.29.3":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0416 17:57:10.895935    6988 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0416 17:57:10.938742    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:11.136149    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:57:13.733531    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5972349s)
	I0416 17:57:13.742898    6988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0416 17:57:13.765918    6988 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:57:13.765918    6988 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0416 17:57:13.765918    6988 cache_images.go:84] Images are preloaded, skipping loading
	I0416 17:57:13.765918    6988 kubeadm.go:928] updating node { 172.19.91.227 8443 v1.29.3 docker true true} ...
	I0416 17:57:13.766906    6988 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-945500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.91.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 17:57:13.774901    6988 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 17:57:13.804585    6988 command_runner.go:130] > cgroupfs
	I0416 17:57:13.804682    6988 cni.go:84] Creating CNI manager for ""
	I0416 17:57:13.804682    6988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 17:57:13.804682    6988 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 17:57:13.804682    6988 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.91.227 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-945500 NodeName:multinode-945500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.91.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.91.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 17:57:13.804682    6988 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.91.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-945500"
	  kubeletExtraArgs:
	    node-ip: 172.19.91.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.91.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 17:57:13.813761    6988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 17:57:13.830081    6988 command_runner.go:130] > kubeadm
	I0416 17:57:13.830081    6988 command_runner.go:130] > kubectl
	I0416 17:57:13.830081    6988 command_runner.go:130] > kubelet
	I0416 17:57:13.830165    6988 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 17:57:13.838770    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 17:57:13.852826    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0416 17:57:13.878799    6988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 17:57:13.905862    6988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0416 17:57:13.943017    6988 ssh_runner.go:195] Run: grep 172.19.91.227	control-plane.minikube.internal$ /etc/hosts
	I0416 17:57:13.949214    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.91.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 17:57:13.980273    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:14.153644    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:57:14.177658    6988 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500 for IP: 172.19.91.227
	I0416 17:57:14.178687    6988 certs.go:194] generating shared ca certs ...
	I0416 17:57:14.178687    6988 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.179455    6988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 17:57:14.179902    6988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 17:57:14.180190    6988 certs.go:256] generating profile certs ...
	I0416 17:57:14.180755    6988 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.key
	I0416 17:57:14.180755    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.crt with IP's: []
	I0416 17:57:14.411174    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.crt ...
	I0416 17:57:14.411174    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.crt: {Name:mkc0623b015c4c96d85b8b3b13eb2cc6d3ac8763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.412171    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.key ...
	I0416 17:57:14.412171    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.key: {Name:mkbd9c01c6892e02b0a8d9c434e98a742e87c2cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.413058    6988 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.e3ea85af
	I0416 17:57:14.414154    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.e3ea85af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.91.227]
	I0416 17:57:14.575473    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.e3ea85af ...
	I0416 17:57:14.575473    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.e3ea85af: {Name:mk62c37573433811afa986b89a237b6c7bb0d1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.576358    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.e3ea85af ...
	I0416 17:57:14.576358    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.e3ea85af: {Name:mk6c23ff826064c327d5a977affe1877b10d9b72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.577574    6988 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.e3ea85af -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt
	I0416 17:57:14.590486    6988 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.e3ea85af -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key
	I0416 17:57:14.590795    6988 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key
	I0416 17:57:14.590795    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt with IP's: []
	I0416 17:57:14.794779    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt ...
	I0416 17:57:14.795779    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt: {Name:mk40c9063a89a73b56bd4ccd89e15d6559ba1e37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.796782    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key ...
	I0416 17:57:14.796782    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key: {Name:mk5e95084b6a4adeb7806da3f2d851d8919dced5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.798528    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 17:57:14.798760    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 17:57:14.799041    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 17:57:14.799237    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 17:57:14.799423    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 17:57:14.799630    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 17:57:14.799827    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 17:57:14.806003    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 17:57:14.809977    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 17:57:14.809977    6988 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 17:57:14.809977    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 17:57:14.809977    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 17:57:14.811027    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 17:57:14.811027    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 17:57:14.811027    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 17:57:14.811551    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 17:57:14.811650    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:14.811737    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 17:57:14.812935    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 17:57:14.852949    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 17:57:14.891959    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 17:57:14.931152    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 17:57:14.968412    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 17:57:15.008983    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 17:57:15.048515    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 17:57:15.089091    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 17:57:15.125356    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 17:57:15.162621    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 17:57:15.205246    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 17:57:15.248985    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 17:57:15.289002    6988 ssh_runner.go:195] Run: openssl version
	I0416 17:57:15.296351    6988 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0416 17:57:15.308333    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 17:57:15.335334    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:15.341349    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:15.342189    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:15.351026    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:15.358591    6988 command_runner.go:130] > b5213941
	I0416 17:57:15.367034    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 17:57:15.391467    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 17:57:15.416387    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 17:57:15.423831    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 17:57:15.423957    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 17:57:15.434442    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 17:57:15.442459    6988 command_runner.go:130] > 51391683
	I0416 17:57:15.451530    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
	I0416 17:57:15.480393    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 17:57:15.509124    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 17:57:15.515721    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 17:57:15.515827    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 17:57:15.524021    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 17:57:15.533694    6988 command_runner.go:130] > 3ec20f2e
	I0416 17:57:15.541647    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 17:57:15.567570    6988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:57:15.573415    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 17:57:15.573840    6988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 17:57:15.574281    6988 kubeadm.go:391] StartCluster: {Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:57:15.580506    6988 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 17:57:15.612292    6988 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 17:57:15.627466    6988 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0416 17:57:15.628097    6988 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0416 17:57:15.628097    6988 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0416 17:57:15.635032    6988 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 17:57:15.660479    6988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 17:57:15.676695    6988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0416 17:57:15.676792    6988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0416 17:57:15.676792    6988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0416 17:57:15.676855    6988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:57:15.676918    6988 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:57:15.676973    6988 kubeadm.go:156] found existing configuration files:
	
	I0416 17:57:15.684985    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 17:57:15.700012    6988 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:57:15.700126    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:57:15.708938    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 17:57:15.734829    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 17:57:15.747861    6988 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:57:15.748201    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:57:15.756696    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 17:57:15.784559    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 17:57:15.804131    6988 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:57:15.804131    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:57:15.815130    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 17:57:15.838118    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 17:57:15.854130    6988 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:57:15.854130    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:57:15.862912    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 17:57:15.876128    6988 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 17:57:16.053541    6988 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 17:57:16.053541    6988 command_runner.go:130] > [init] Using Kubernetes version: v1.29.3
	I0416 17:57:16.053865    6988 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 17:57:16.053865    6988 command_runner.go:130] > [preflight] Running pre-flight checks
	I0416 17:57:16.200461    6988 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:57:16.200461    6988 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:57:16.200461    6988 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:57:16.200461    6988 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:57:16.200461    6988 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:57:16.200461    6988 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:57:16.451494    6988 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:57:16.452473    6988 out.go:204]   - Generating certificates and keys ...
	I0416 17:57:16.451494    6988 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:57:16.453479    6988 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 17:57:16.453479    6988 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0416 17:57:16.453479    6988 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0416 17:57:16.453479    6988 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 17:57:16.705308    6988 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 17:57:16.705409    6988 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 17:57:16.859312    6988 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 17:57:16.859312    6988 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0416 17:57:17.049120    6988 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 17:57:17.049237    6988 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0416 17:57:17.314616    6988 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 17:57:17.314728    6988 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0416 17:57:17.509835    6988 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 17:57:17.509835    6988 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0416 17:57:17.510247    6988 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-945500] and IPs [172.19.91.227 127.0.0.1 ::1]
	I0416 17:57:17.510247    6988 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-945500] and IPs [172.19.91.227 127.0.0.1 ::1]
	I0416 17:57:17.791919    6988 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 17:57:17.791919    6988 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0416 17:57:17.792356    6988 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-945500] and IPs [172.19.91.227 127.0.0.1 ::1]
	I0416 17:57:17.792356    6988 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-945500] and IPs [172.19.91.227 127.0.0.1 ::1]
	I0416 17:57:17.995022    6988 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 17:57:17.995106    6988 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 17:57:18.220639    6988 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 17:57:18.220729    6988 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 17:57:18.582174    6988 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0416 17:57:18.582274    6988 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 17:57:18.582480    6988 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:57:18.582554    6988 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:57:18.743963    6988 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:57:18.744564    6988 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:57:19.067769    6988 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 17:57:19.068120    6988 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 17:57:19.240331    6988 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:57:19.240672    6988 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:57:19.461195    6988 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:57:19.461195    6988 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:57:19.652943    6988 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:57:19.653442    6988 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:57:19.654516    6988 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:57:19.654516    6988 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:57:19.660559    6988 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:57:19.660559    6988 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:57:19.661534    6988 out.go:204]   - Booting up control plane ...
	I0416 17:57:19.661534    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:57:19.661534    6988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:57:19.662544    6988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:57:19.662544    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:57:19.663540    6988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:57:19.663540    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:57:19.684534    6988 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:57:19.685153    6988 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:57:19.687532    6988 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:57:19.687532    6988 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:57:19.687532    6988 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 17:57:19.687532    6988 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0416 17:57:19.860703    6988 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:57:19.860788    6988 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:57:26.366044    6988 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.507200 seconds
	I0416 17:57:26.366044    6988 command_runner.go:130] > [apiclient] All control plane components are healthy after 6.507200 seconds
	I0416 17:57:26.385213    6988 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 17:57:26.385213    6988 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 17:57:26.408456    6988 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 17:57:26.408456    6988 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 17:57:26.942416    6988 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0416 17:57:26.942416    6988 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 17:57:26.943198    6988 kubeadm.go:309] [mark-control-plane] Marking the node multinode-945500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 17:57:26.943369    6988 command_runner.go:130] > [mark-control-plane] Marking the node multinode-945500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 17:57:27.456093    6988 kubeadm.go:309] [bootstrap-token] Using token: v7bkxo.pzxgmh7iiytdovwq
	I0416 17:57:27.456235    6988 command_runner.go:130] > [bootstrap-token] Using token: v7bkxo.pzxgmh7iiytdovwq
	I0416 17:57:27.456953    6988 out.go:204]   - Configuring RBAC rules ...
	I0416 17:57:27.457407    6988 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 17:57:27.457407    6988 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 17:57:27.473244    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 17:57:27.473244    6988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 17:57:27.485961    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 17:57:27.486019    6988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 17:57:27.492510    6988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 17:57:27.492510    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 17:57:27.496129    6988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 17:57:27.496129    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 17:57:27.501092    6988 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 17:57:27.501753    6988 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 17:57:27.517045    6988 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 17:57:27.517045    6988 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 17:57:27.829288    6988 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 17:57:27.829833    6988 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0416 17:57:27.880030    6988 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 17:57:27.880030    6988 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0416 17:57:27.883021    6988 kubeadm.go:309] 
	I0416 17:57:27.883395    6988 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0416 17:57:27.883467    6988 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 17:57:27.883558    6988 kubeadm.go:309] 
	I0416 17:57:27.883809    6988 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 17:57:27.883809    6988 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0416 17:57:27.883877    6988 kubeadm.go:309] 
	I0416 17:57:27.883877    6988 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 17:57:27.883877    6988 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0416 17:57:27.883877    6988 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 17:57:27.883877    6988 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 17:57:27.883877    6988 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 17:57:27.883877    6988 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 17:57:27.883877    6988 kubeadm.go:309] 
	I0416 17:57:27.883877    6988 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0416 17:57:27.883877    6988 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 17:57:27.883877    6988 kubeadm.go:309] 
	I0416 17:57:27.884765    6988 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 17:57:27.884765    6988 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 17:57:27.884765    6988 kubeadm.go:309] 
	I0416 17:57:27.884765    6988 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 17:57:27.884765    6988 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0416 17:57:27.884765    6988 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 17:57:27.884765    6988 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 17:57:27.884765    6988 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 17:57:27.884765    6988 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 17:57:27.884765    6988 kubeadm.go:309] 
	I0416 17:57:27.884765    6988 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0416 17:57:27.884765    6988 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 17:57:27.885775    6988 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0416 17:57:27.885775    6988 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 17:57:27.885775    6988 kubeadm.go:309] 
	I0416 17:57:27.885775    6988 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token v7bkxo.pzxgmh7iiytdovwq \
	I0416 17:57:27.885775    6988 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token v7bkxo.pzxgmh7iiytdovwq \
	I0416 17:57:27.885775    6988 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 17:57:27.885775    6988 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 17:57:27.885775    6988 kubeadm.go:309] 	--control-plane 
	I0416 17:57:27.885775    6988 command_runner.go:130] > 	--control-plane 
	I0416 17:57:27.885775    6988 kubeadm.go:309] 
	I0416 17:57:27.886749    6988 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0416 17:57:27.886749    6988 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 17:57:27.886749    6988 kubeadm.go:309] 
	I0416 17:57:27.886749    6988 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token v7bkxo.pzxgmh7iiytdovwq \
	I0416 17:57:27.886749    6988 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token v7bkxo.pzxgmh7iiytdovwq \
	I0416 17:57:27.886749    6988 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 17:57:27.886749    6988 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 17:57:27.886749    6988 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 17:57:27.887747    6988 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 17:57:27.887747    6988 cni.go:84] Creating CNI manager for ""
	I0416 17:57:27.887747    6988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 17:57:27.888782    6988 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 17:57:27.898776    6988 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 17:57:27.906367    6988 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0416 17:57:27.906367    6988 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0416 17:57:27.906446    6988 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0416 17:57:27.906446    6988 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0416 17:57:27.906446    6988 command_runner.go:130] > Access: 2024-04-16 17:55:43.845708000 +0000
	I0416 17:57:27.906446    6988 command_runner.go:130] > Modify: 2024-04-16 08:43:32.000000000 +0000
	I0416 17:57:27.906446    6988 command_runner.go:130] > Change: 2024-04-16 17:55:34.250000000 +0000
	I0416 17:57:27.906446    6988 command_runner.go:130] >  Birth: -
	I0416 17:57:27.906446    6988 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 17:57:27.906446    6988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 17:57:27.988519    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 17:57:28.490851    6988 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0416 17:57:28.498847    6988 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0416 17:57:28.511858    6988 command_runner.go:130] > serviceaccount/kindnet created
	I0416 17:57:28.523843    6988 command_runner.go:130] > daemonset.apps/kindnet created
	I0416 17:57:28.526917    6988 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 17:57:28.536843    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:28.538723    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-945500 minikube.k8s.io/updated_at=2024_04_16T17_57_28_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=multinode-945500 minikube.k8s.io/primary=true
	I0416 17:57:28.553542    6988 command_runner.go:130] > -16
	I0416 17:57:28.553542    6988 ops.go:34] apiserver oom_adj: -16
	I0416 17:57:28.663066    6988 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0416 17:57:28.672472    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:28.703696    6988 command_runner.go:130] > node/multinode-945500 labeled
	I0416 17:57:28.779726    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:29.176642    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:29.310699    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:29.688820    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:29.783095    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:30.180137    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:30.283623    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:30.677902    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:30.770542    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:31.173788    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:31.267177    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:31.681339    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:31.776737    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:32.179098    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:32.275419    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:32.685593    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:32.784034    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:33.184934    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:33.284755    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:33.689894    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:33.786322    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:34.177543    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:34.278089    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:34.688074    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:34.788843    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:35.176613    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:35.278146    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:35.690652    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:35.790109    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:36.185543    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:36.283203    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:36.685087    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:36.787681    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:37.183826    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:37.287103    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:37.686779    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:37.790505    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:38.186663    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:38.313330    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:38.690145    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:38.792194    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:39.188096    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:39.307296    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:39.673049    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:39.777746    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:40.175109    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:40.317376    6988 command_runner.go:130] > NAME      SECRETS   AGE
	I0416 17:57:40.317525    6988 command_runner.go:130] > default   0         0s
	I0416 17:57:40.317525    6988 kubeadm.go:1107] duration metric: took 11.7899387s to wait for elevateKubeSystemPrivileges
	W0416 17:57:40.317725    6988 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 17:57:40.317725    6988 kubeadm.go:393] duration metric: took 24.7420862s to StartCluster
	I0416 17:57:40.317841    6988 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:40.318068    6988 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:57:40.320080    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:40.321302    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 17:57:40.321470    6988 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 17:57:40.321470    6988 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 17:57:40.321614    6988 addons.go:69] Setting storage-provisioner=true in profile "multinode-945500"
	I0416 17:57:40.321614    6988 addons.go:234] Setting addon storage-provisioner=true in "multinode-945500"
	I0416 17:57:40.321614    6988 host.go:66] Checking if "multinode-945500" exists ...
	I0416 17:57:40.321614    6988 addons.go:69] Setting default-storageclass=true in profile "multinode-945500"
	I0416 17:57:40.321614    6988 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-945500"
	I0416 17:57:40.321614    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:57:40.322690    6988 out.go:177] * Verifying Kubernetes components...
	I0416 17:57:40.322606    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:40.322690    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:40.336146    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:40.543940    6988 command_runner.go:130] > apiVersion: v1
	I0416 17:57:40.544012    6988 command_runner.go:130] > data:
	I0416 17:57:40.544012    6988 command_runner.go:130] >   Corefile: |
	I0416 17:57:40.544012    6988 command_runner.go:130] >     .:53 {
	I0416 17:57:40.544012    6988 command_runner.go:130] >         errors
	I0416 17:57:40.544012    6988 command_runner.go:130] >         health {
	I0416 17:57:40.544088    6988 command_runner.go:130] >            lameduck 5s
	I0416 17:57:40.544088    6988 command_runner.go:130] >         }
	I0416 17:57:40.544088    6988 command_runner.go:130] >         ready
	I0416 17:57:40.544112    6988 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0416 17:57:40.544112    6988 command_runner.go:130] >            pods insecure
	I0416 17:57:40.544112    6988 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0416 17:57:40.544112    6988 command_runner.go:130] >            ttl 30
	I0416 17:57:40.544112    6988 command_runner.go:130] >         }
	I0416 17:57:40.544112    6988 command_runner.go:130] >         prometheus :9153
	I0416 17:57:40.544112    6988 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0416 17:57:40.544191    6988 command_runner.go:130] >            max_concurrent 1000
	I0416 17:57:40.544191    6988 command_runner.go:130] >         }
	I0416 17:57:40.544191    6988 command_runner.go:130] >         cache 30
	I0416 17:57:40.544191    6988 command_runner.go:130] >         loop
	I0416 17:57:40.544191    6988 command_runner.go:130] >         reload
	I0416 17:57:40.544191    6988 command_runner.go:130] >         loadbalance
	I0416 17:57:40.544191    6988 command_runner.go:130] >     }
	I0416 17:57:40.544191    6988 command_runner.go:130] > kind: ConfigMap
	I0416 17:57:40.544191    6988 command_runner.go:130] > metadata:
	I0416 17:57:40.544191    6988 command_runner.go:130] >   creationTimestamp: "2024-04-16T17:57:27Z"
	I0416 17:57:40.544191    6988 command_runner.go:130] >   name: coredns
	I0416 17:57:40.544191    6988 command_runner.go:130] >   namespace: kube-system
	I0416 17:57:40.544296    6988 command_runner.go:130] >   resourceVersion: "274"
	I0416 17:57:40.544296    6988 command_runner.go:130] >   uid: 8b9b71a6-9315-41d9-b055-6f10c4c901fd
	I0416 17:57:40.544483    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 17:57:40.652097    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:57:40.902041    6988 command_runner.go:130] > configmap/coredns replaced
	I0416 17:57:40.905269    6988 start.go:946] {"host.minikube.internal": 172.19.80.1} host record injected into CoreDNS's ConfigMap
	I0416 17:57:40.906408    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:57:40.906594    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:57:40.907054    6988 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.91.227:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 17:57:40.907195    6988 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.91.227:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 17:57:40.908042    6988 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 17:57:40.908659    6988 node_ready.go:35] waiting up to 6m0s for node "multinode-945500" to be "Ready" ...
	I0416 17:57:40.908860    6988 round_trippers.go:463] GET https://172.19.91.227:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0416 17:57:40.908860    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:40.908860    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:40.908860    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:40.908860    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:40.908860    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:40.908955    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:40.908955    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:40.937154    6988 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0416 17:57:40.937516    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:40.937516    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:40.937516    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:40.937516    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:40.937516    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:40.937516    6988 round_trippers.go:580]     Audit-Id: e2e8d91f-cc17-4b2b-a543-43ca22e7c70f
	I0416 17:57:40.937516    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:40.937792    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:40.938405    6988 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0416 17:57:40.938543    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:40.938543    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:40.938543    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:40.938543    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:40.938543    6988 round_trippers.go:580]     Content-Length: 291
	I0416 17:57:40.938662    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:40.938662    6988 round_trippers.go:580]     Audit-Id: 9f1849c0-96cc-4587-8702-5be0aa8b035b
	I0416 17:57:40.938662    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:40.938662    6988 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"474cfa32-79eb-4bf1-81ff-b938f83eaa0d","resourceVersion":"383","creationTimestamp":"2024-04-16T17:57:27Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0416 17:57:40.939508    6988 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"474cfa32-79eb-4bf1-81ff-b938f83eaa0d","resourceVersion":"383","creationTimestamp":"2024-04-16T17:57:27Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0416 17:57:40.939654    6988 round_trippers.go:463] PUT https://172.19.91.227:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0416 17:57:40.939709    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:40.939709    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:40.939709    6988 round_trippers.go:473]     Content-Type: application/json
	I0416 17:57:40.939709    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:40.954484    6988 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0416 17:57:40.954484    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Content-Length: 291
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Audit-Id: 33fbc171-b87c-4a8b-8b71-fb72b829abb0
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:40.954484    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:40.954484    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:40.954484    6988 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"474cfa32-79eb-4bf1-81ff-b938f83eaa0d","resourceVersion":"385","creationTimestamp":"2024-04-16T17:57:27Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0416 17:57:41.416463    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:41.416653    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:41.416463    6988 round_trippers.go:463] GET https://172.19.91.227:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0416 17:57:41.416653    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:41.416653    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:41.416653    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:41.416739    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:41.416886    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:41.420106    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:41.420495    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:41.420495    6988 round_trippers.go:580]     Audit-Id: 0ef8009e-dcde-4e08-b2eb-b21c97c9713b
	I0416 17:57:41.420495    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:41.420495    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:41.420495    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:41.420495    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:41.420495    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:41.420873    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:41.420873    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:41.420970    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:41.420970    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Content-Length: 291
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Audit-Id: 876a0092-4e47-429b-acd8-759d477820ca
	I0416 17:57:41.421083    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:41.421155    6988 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"474cfa32-79eb-4bf1-81ff-b938f83eaa0d","resourceVersion":"395","creationTimestamp":"2024-04-16T17:57:27Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0416 17:57:41.421374    6988 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-945500" context rescaled to 1 replicas
	I0416 17:57:41.920343    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:41.920343    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:41.920343    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:41.920343    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:41.925445    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 17:57:41.925445    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:41.925445    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:42 GMT
	I0416 17:57:41.925445    6988 round_trippers.go:580]     Audit-Id: 7df7d5cd-8d90-47e3-a620-e333515b8855
	I0416 17:57:41.925445    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:41.925445    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:41.925445    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:41.925445    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:41.927690    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:42.389093    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:42.389178    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:42.389320    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:42.390035    6988 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:57:42.389320    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:42.390775    6988 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:57:42.390775    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 17:57:42.390840    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:57:42.390906    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:42.391435    6988 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.91.227:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 17:57:42.392060    6988 addons.go:234] Setting addon default-storageclass=true in "multinode-945500"
	I0416 17:57:42.392151    6988 host.go:66] Checking if "multinode-945500" exists ...
	I0416 17:57:42.393041    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:42.412561    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:42.412743    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:42.412743    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:42.412743    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:42.419056    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 17:57:42.419366    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:42.419366    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:42.419366    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:42.419366    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:42.419366    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:42.419366    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:42 GMT
	I0416 17:57:42.419366    6988 round_trippers.go:580]     Audit-Id: b3f3bd38-d9b8-462a-9951-d6845f4c1e8b
	I0416 17:57:42.419606    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:42.919136    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:42.919136    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:42.919136    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:42.919136    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:42.922770    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:42.923481    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:42.923481    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:43 GMT
	I0416 17:57:42.923481    6988 round_trippers.go:580]     Audit-Id: 0619e710-cc23-453b-93b8-902006c18fd4
	I0416 17:57:42.923481    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:42.923481    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:42.923481    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:42.923481    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:42.924373    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:42.924671    6988 node_ready.go:53] node "multinode-945500" has status "Ready":"False"
	I0416 17:57:43.422289    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:43.422289    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:43.422289    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:43.422289    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:43.426297    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:43.426759    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:43.426855    6988 round_trippers.go:580]     Audit-Id: 3881c6f2-0168-43dd-afc5-e5828acf3c8d
	I0416 17:57:43.426855    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:43.426855    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:43.426855    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:43.426936    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:43.426936    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:43 GMT
	I0416 17:57:43.427005    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:43.912103    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:43.912103    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:43.912103    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:43.912103    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:43.915707    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:43.916753    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:43.916753    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:44 GMT
	I0416 17:57:43.916753    6988 round_trippers.go:580]     Audit-Id: 5c816ab6-0256-4da7-8677-2eed63915566
	I0416 17:57:43.916782    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:43.916782    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:43.916782    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:43.916782    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:43.917611    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:44.422232    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:44.422232    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:44.422232    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:44.422232    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:44.425983    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:44.426131    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:44.426131    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:44.426131    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:44 GMT
	I0416 17:57:44.426209    6988 round_trippers.go:580]     Audit-Id: 9338168a-3808-4f3d-8a58-744d48096dc5
	I0416 17:57:44.426209    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:44.426209    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:44.426209    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:44.426209    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:44.514747    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:44.514747    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:44.515754    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:57:44.517753    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:44.517753    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:44.517753    6988 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 17:57:44.517753    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 17:57:44.517753    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:44.911211    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:44.911456    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:44.911456    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:44.911456    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:44.915270    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:44.915270    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:44.915270    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:44.915270    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:44.915270    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:44.915270    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:44.915270    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:45 GMT
	I0416 17:57:44.915270    6988 round_trippers.go:580]     Audit-Id: 4c85a024-69e3-42e3-8a96-0b4369f957e4
	I0416 17:57:44.916208    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:45.417189    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:45.417189    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:45.417189    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:45.417189    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:45.424768    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 17:57:45.424768    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:45.424768    6988 round_trippers.go:580]     Audit-Id: 0310038d-76b3-4992-9ac3-7533f23a7d71
	I0416 17:57:45.424768    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:45.424768    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:45.424768    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:45.424768    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:45.424768    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:45 GMT
	I0416 17:57:45.425371    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:45.425371    6988 node_ready.go:53] node "multinode-945500" has status "Ready":"False"
	I0416 17:57:45.923330    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:45.923330    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:45.923330    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:45.923330    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:45.925920    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:45.925920    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:45.926718    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:45.926718    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:45.926718    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:46 GMT
	I0416 17:57:45.926718    6988 round_trippers.go:580]     Audit-Id: 97c2ee9c-f0ff-43e0-b2a8-48327b90a95f
	I0416 17:57:45.926718    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:45.926718    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:45.927203    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:46.418033    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:46.418033    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:46.418033    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:46.418033    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:46.501786    6988 round_trippers.go:574] Response Status: 200 OK in 83 milliseconds
	I0416 17:57:46.501786    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:46.501786    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:46 GMT
	I0416 17:57:46.501786    6988 round_trippers.go:580]     Audit-Id: 7df6f9f0-10ff-4db8-bfad-3fc7f1364386
	I0416 17:57:46.501786    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:46.501786    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:46.501905    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:46.501905    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:46.503216    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:46.635075    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:46.635075    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:46.635935    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:57:46.921581    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:46.921653    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:46.921653    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:46.921720    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:46.924533    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:46.924533    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:46.924758    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:46.924758    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:46.924758    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:46.924758    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:46.924758    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:47 GMT
	I0416 17:57:46.924758    6988 round_trippers.go:580]     Audit-Id: e78831c8-f850-4752-a899-e59b21c78198
	I0416 17:57:46.924832    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:46.982609    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:57:46.982609    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:46.982609    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:57:47.140657    6988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:57:47.423704    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:47.423704    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:47.423704    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:47.423704    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:47.427881    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:47.428047    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:47.428047    6988 round_trippers.go:580]     Audit-Id: 23292552-c2df-4084-b58f-d36e231163f8
	I0416 17:57:47.428047    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:47.428047    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:47.428047    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:47.428047    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:47.428047    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:47 GMT
	I0416 17:57:47.428436    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:47.428909    6988 node_ready.go:53] node "multinode-945500" has status "Ready":"False"
	I0416 17:57:47.642156    6988 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0416 17:57:47.642156    6988 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0416 17:57:47.642263    6988 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0416 17:57:47.642263    6988 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0416 17:57:47.642263    6988 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0416 17:57:47.642352    6988 command_runner.go:130] > pod/storage-provisioner created
	I0416 17:57:47.915174    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:47.915174    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:47.915174    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:47.915174    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:47.919802    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:47.919802    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:47.919802    6988 round_trippers.go:580]     Audit-Id: 695031a3-c73c-4762-a80a-ead4be6d3a90
	I0416 17:57:47.919802    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:47.919802    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:47.919802    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:47.919802    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:47.919802    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:48 GMT
	I0416 17:57:47.921798    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:48.424055    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:48.424122    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:48.424122    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:48.424122    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:48.427517    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:48.427517    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:48.427517    6988 round_trippers.go:580]     Audit-Id: 7545d9c7-2c95-4fab-863b-976fb672f07e
	I0416 17:57:48.427517    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:48.427517    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:48.427517    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:48.427517    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:48.427517    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:48 GMT
	I0416 17:57:48.428336    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:48.912182    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:48.912285    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:48.912285    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:48.912285    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:48.915718    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:48.915718    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:48.915718    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:48.915718    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:48.915718    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:49 GMT
	I0416 17:57:48.915718    6988 round_trippers.go:580]     Audit-Id: 2263b32c-d20d-46cd-879e-9105b86a7194
	I0416 17:57:48.915718    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:48.915718    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:48.916253    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:49.012275    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:57:49.012444    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:49.012783    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:57:49.142232    6988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 17:57:49.275828    6988 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0416 17:57:49.276194    6988 round_trippers.go:463] GET https://172.19.91.227:8443/apis/storage.k8s.io/v1/storageclasses
	I0416 17:57:49.276271    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:49.276271    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:49.276381    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:49.279132    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:49.279132    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:49.279132    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:49.279132    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:49.279132    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:49.279132    6988 round_trippers.go:580]     Content-Length: 1273
	I0416 17:57:49.279132    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:49 GMT
	I0416 17:57:49.279397    6988 round_trippers.go:580]     Audit-Id: b06ff280-6eac-43c1-91fe-e3ebbad21f66
	I0416 17:57:49.279397    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:49.279397    6988 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"standard","uid":"fad243f1-4905-48ae-985d-d89cda0607a0","resourceVersion":"419","creationTimestamp":"2024-04-16T17:57:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-16T17:57:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0416 17:57:49.279545    6988 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"fad243f1-4905-48ae-985d-d89cda0607a0","resourceVersion":"419","creationTimestamp":"2024-04-16T17:57:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-16T17:57:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0416 17:57:49.279545    6988 round_trippers.go:463] PUT https://172.19.91.227:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0416 17:57:49.280079    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:49.280079    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:49.280079    6988 round_trippers.go:473]     Content-Type: application/json
	I0416 17:57:49.280122    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:49.283131    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:49.283131    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Audit-Id: 58e327bf-d681-4c51-8630-376535cfdae0
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:49.283131    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:49.283131    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Content-Length: 1220
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:49 GMT
	I0416 17:57:49.283131    6988 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"fad243f1-4905-48ae-985d-d89cda0607a0","resourceVersion":"419","creationTimestamp":"2024-04-16T17:57:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-16T17:57:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0416 17:57:49.284142    6988 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 17:57:49.285110    6988 addons.go:505] duration metric: took 8.9631309s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 17:57:49.413824    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:49.413824    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:49.413824    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:49.413824    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:49.420066    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 17:57:49.420066    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:49.420066    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:49 GMT
	I0416 17:57:49.420066    6988 round_trippers.go:580]     Audit-Id: 673fcfb7-e79c-42ba-abaf-e828c3df7a7a
	I0416 17:57:49.420066    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:49.420066    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:49.420066    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:49.420066    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:49.420066    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:49.915557    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:49.915632    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:49.915632    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:49.915632    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:49.920023    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:49.920023    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:49.920023    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:49.920023    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:49.920023    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:49.920023    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:49.920023    6988 round_trippers.go:580]     Audit-Id: cb813c2c-6bb9-41d0-a192-81d5df39cc31
	I0416 17:57:49.920023    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:49.920752    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:49.920881    6988 node_ready.go:53] node "multinode-945500" has status "Ready":"False"
	I0416 17:57:50.414309    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:50.414309    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.414309    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.414309    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.421246    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 17:57:50.421246    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.421246    6988 round_trippers.go:580]     Audit-Id: 9a47d54e-a489-4e7c-8e6e-1768c6e24a06
	I0416 17:57:50.421246    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.421246    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.421246    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.421246    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.421246    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:50.421586    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:50.422041    6988 node_ready.go:49] node "multinode-945500" has status "Ready":"True"
	I0416 17:57:50.422127    6988 node_ready.go:38] duration metric: took 9.5128501s for node "multinode-945500" to be "Ready" ...
	I0416 17:57:50.422127    6988 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:57:50.422288    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:50.422288    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.422288    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.422352    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.426293    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:50.426293    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.426293    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:50.426293    6988 round_trippers.go:580]     Audit-Id: 13196519-ea29-4856-beaa-5c943f886806
	I0416 17:57:50.426293    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.426293    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.426645    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.426645    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.427551    6988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56336 chars]
	I0416 17:57:50.432315    6988 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:50.432315    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:50.432315    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.432315    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.432315    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.435446    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:50.435446    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.435446    6988 round_trippers.go:580]     Audit-Id: 0da838d3-4490-46a7-8d52-0929abb29d06
	I0416 17:57:50.435446    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.435446    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.435446    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.435446    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.435446    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:50.435667    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0416 17:57:50.436341    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:50.436417    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.436417    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.436417    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.441670    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 17:57:50.441670    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.441670    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:50.441670    6988 round_trippers.go:580]     Audit-Id: 7f63ee25-4ff7-418f-b7b2-b71003d58b29
	I0416 17:57:50.441670    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.441670    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.441670    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.441670    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.441670    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:50.933620    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:50.933620    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.933620    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.933620    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.936638    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:50.936638    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.937401    6988 round_trippers.go:580]     Audit-Id: 61428305-720d-4f2d-9189-d4c9892ef7e3
	I0416 17:57:50.937401    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.937401    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.937401    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.937401    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.937401    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:51 GMT
	I0416 17:57:50.937680    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0416 17:57:50.938372    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:50.938438    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.938438    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.938438    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.940646    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:50.940646    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.940646    6988 round_trippers.go:580]     Audit-Id: 62d4cd2d-a2dc-447d-8fe8-0ab2e8469374
	I0416 17:57:50.940646    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.940646    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.940646    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.940646    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.940646    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:51 GMT
	I0416 17:57:50.941893    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:51.436888    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:51.436973    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:51.437057    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:51.437057    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:51.440468    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:51.440468    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:51.440468    6988 round_trippers.go:580]     Audit-Id: 854d513c-8ed8-40d2-a6f4-c3ce631c5044
	I0416 17:57:51.440468    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:51.440468    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:51.440468    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:51.440468    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:51.440468    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:51 GMT
	I0416 17:57:51.441473    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0416 17:57:51.442446    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:51.442513    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:51.442513    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:51.442513    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:51.448074    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 17:57:51.448074    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:51.448074    6988 round_trippers.go:580]     Audit-Id: ea821fd7-5bb9-4fc8-adab-1d7de329d33c
	I0416 17:57:51.448074    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:51.448074    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:51.448074    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:51.448074    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:51.448074    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:51 GMT
	I0416 17:57:51.448761    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:51.936346    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:51.936438    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:51.936438    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:51.936438    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:51.940774    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:51.940774    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:51.940774    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:51.940774    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:51.940774    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:51.940774    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:51.940774    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:51.940774    6988 round_trippers.go:580]     Audit-Id: 39edef38-eddb-4269-abe8-a908e1d21987
	I0416 17:57:51.941262    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0416 17:57:51.941999    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:51.942068    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:51.942068    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:51.942068    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:51.944728    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:51.944728    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:51.944728    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:51.944728    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:51.945637    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:51.945637    6988 round_trippers.go:580]     Audit-Id: e9f648f9-92bc-4242-8c2c-17b661038154
	I0416 17:57:51.945637    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:51.945637    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:51.945961    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.434152    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:52.434152    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.434152    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.434152    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.438737    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:52.438737    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.438905    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.438905    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.438905    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.438905    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.438905    6988 round_trippers.go:580]     Audit-Id: 64fc4c09-2c08-4c20-886d-b65cc89badc2
	I0416 17:57:52.438905    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.439311    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0416 17:57:52.440372    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.440372    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.440471    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.440471    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.442800    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.442800    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.442800    6988 round_trippers.go:580]     Audit-Id: 69a074dd-0323-4dfd-a4d9-2a31cf93ae57
	I0416 17:57:52.442800    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.442800    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.442800    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.442800    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.442800    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.443974    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.444376    6988 pod_ready.go:92] pod "coredns-76f75df574-86z7h" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.444463    6988 pod_ready.go:81] duration metric: took 2.0119463s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.444463    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.444559    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 17:57:52.444559    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.444559    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.444559    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.448264    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:52.448675    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.448709    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.448709    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.448709    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.448709    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.448709    6988 round_trippers.go:580]     Audit-Id: 6a1f3697-4191-47e0-93ea-8556479112b5
	I0416 17:57:52.448709    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.448895    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"245cef70-3506-471b-9bf6-dd14a2c23d8c","resourceVersion":"372","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.91.227:2379","kubernetes.io/config.hash":"c735a32dacf9631b2b4787fe99cff316","kubernetes.io/config.mirror":"c735a32dacf9631b2b4787fe99cff316","kubernetes.io/config.seen":"2024-04-16T17:57:28.101466445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0416 17:57:52.449544    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.449618    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.449618    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.449618    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.457774    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0416 17:57:52.457774    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.457774    6988 round_trippers.go:580]     Audit-Id: 6aa9935f-5cde-4c2d-90c1-770e6d9b42ec
	I0416 17:57:52.457774    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.457774    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.457774    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.457774    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.457774    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.457774    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.457774    6988 pod_ready.go:92] pod "etcd-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.457774    6988 pod_ready.go:81] duration metric: took 13.3102ms for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.458783    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.458817    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-945500
	I0416 17:57:52.458817    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.458817    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.458817    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.462379    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:52.462379    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.462379    6988 round_trippers.go:580]     Audit-Id: 3d6fa3f7-ff7f-4322-a2e8-b5a0c4fb1daf
	I0416 17:57:52.462379    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.462379    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.462379    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.462379    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.462379    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.462379    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-945500","namespace":"kube-system","uid":"c6ae969a-de5d-4c7e-af09-b1a5eb21f2ab","resourceVersion":"314","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.91.227:8443","kubernetes.io/config.hash":"564fae5a3e5851c815d6092b123a5395","kubernetes.io/config.mirror":"564fae5a3e5851c815d6092b123a5395","kubernetes.io/config.seen":"2024-04-16T17:57:28.101471746Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0416 17:57:52.464244    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.464374    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.464374    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.464374    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.466690    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.466690    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.466690    6988 round_trippers.go:580]     Audit-Id: d3396616-a825-4d83-94f7-1691134d1559
	I0416 17:57:52.466690    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.466690    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.466690    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.466690    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.466690    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.467128    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.467128    6988 pod_ready.go:92] pod "kube-apiserver-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.467128    6988 pod_ready.go:81] duration metric: took 8.3444ms for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.467128    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.467128    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-945500
	I0416 17:57:52.467655    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.467655    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.467655    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.469965    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.469965    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.469965    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.469965    6988 round_trippers.go:580]     Audit-Id: 69b40722-0130-4c39-98a1-4a3e7990d75a
	I0416 17:57:52.469965    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.469965    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.469965    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.469965    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.469965    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-945500","namespace":"kube-system","uid":"01b937c2-9827-4240-83f0-3536fec5eb5e","resourceVersion":"345","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.mirror":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.seen":"2024-04-16T17:57:28.101473146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0416 17:57:52.471692    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.471736    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.471736    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.471736    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.474312    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.474312    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.474312    6988 round_trippers.go:580]     Audit-Id: ef6911fd-c5b9-4c1a-85d8-6d4810547589
	I0416 17:57:52.474312    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.474312    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.474312    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.474312    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.474312    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.474842    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.475259    6988 pod_ready.go:92] pod "kube-controller-manager-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.475298    6988 pod_ready.go:81] duration metric: took 8.1314ms for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.475298    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.475372    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 17:57:52.475407    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.475446    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.475446    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.480328    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:52.480328    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.480328    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.480328    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.480328    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.480328    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.480328    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.480328    6988 round_trippers.go:580]     Audit-Id: 5505b192-812e-4b7d-b573-cc48b255735a
	I0416 17:57:52.480328    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rfxsg","generateName":"kube-proxy-","namespace":"kube-system","uid":"b740e6e0-4768-4dd4-a958-307662a92578","resourceVersion":"401","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0416 17:57:52.480969    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.480969    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.480969    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.480969    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.484123    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:52.484123    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.484123    6988 round_trippers.go:580]     Audit-Id: 242d2743-3177-42b4-9e74-5bce35db3f1d
	I0416 17:57:52.484123    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.484123    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.484123    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.484123    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.484123    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.484955    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.485557    6988 pod_ready.go:92] pod "kube-proxy-rfxsg" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.485602    6988 pod_ready.go:81] duration metric: took 10.2584ms for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.485602    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.638123    6988 request.go:629] Waited for 152.4159ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 17:57:52.638123    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 17:57:52.638123    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.638123    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.638123    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.642880    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:52.642880    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.642880    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.642880    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.642880    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.642880    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.642880    6988 round_trippers.go:580]     Audit-Id: 8f2e930a-7531-48ab-83eb-71103cec3dde
	I0416 17:57:52.642880    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.642880    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-945500","namespace":"kube-system","uid":"a09e52e8-1ac2-4c22-8a3d-57969fae85a9","resourceVersion":"310","creationTimestamp":"2024-04-16T17:57:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.mirror":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.seen":"2024-04-16T17:57:20.694761708Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0416 17:57:52.840231    6988 request.go:629] Waited for 196.2271ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.840540    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.840540    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.840640    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.840640    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.845870    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 17:57:52.845870    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.845870    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.845870    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.845870    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.845870    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.845870    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:52.845870    6988 round_trippers.go:580]     Audit-Id: 05acaca5-b7c1-4fab-9ace-d775a055e4f5
	I0416 17:57:52.846425    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.846879    6988 pod_ready.go:92] pod "kube-scheduler-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.846957    6988 pod_ready.go:81] duration metric: took 361.3343ms for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.846957    6988 pod_ready.go:38] duration metric: took 2.4246918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:57:52.846957    6988 api_server.go:52] waiting for apiserver process to appear ...
	I0416 17:57:52.859063    6988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:57:52.885312    6988 command_runner.go:130] > 2058
	I0416 17:57:52.885400    6988 api_server.go:72] duration metric: took 12.562985s to wait for apiserver process to appear ...
	I0416 17:57:52.885400    6988 api_server.go:88] waiting for apiserver healthz status ...
	I0416 17:57:52.885400    6988 api_server.go:253] Checking apiserver healthz at https://172.19.91.227:8443/healthz ...
	I0416 17:57:52.898178    6988 api_server.go:279] https://172.19.91.227:8443/healthz returned 200:
	ok
	I0416 17:57:52.898356    6988 round_trippers.go:463] GET https://172.19.91.227:8443/version
	I0416 17:57:52.898430    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.898430    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.898463    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.900671    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.900731    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.900731    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.900731    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Content-Length: 263
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Audit-Id: 23327aeb-4415-44a9-ac4c-ac1fb850d1c4
	I0416 17:57:52.900731    6988 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0416 17:57:52.900731    6988 api_server.go:141] control plane version: v1.29.3
	I0416 17:57:52.900731    6988 api_server.go:131] duration metric: took 15.3302ms to wait for apiserver health ...
	I0416 17:57:52.900731    6988 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 17:57:53.042203    6988 request.go:629] Waited for 141.464ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:53.042203    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:53.042203    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:53.042203    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:53.042203    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:53.047811    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:53.047811    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:53.047931    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:53.047931    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:53.047931    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:53.047931    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:53.047931    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:53.047931    6988 round_trippers.go:580]     Audit-Id: 0112d2ef-1059-4960-9329-11966d09c0ed
	I0416 17:57:53.050025    6988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56450 chars]
	I0416 17:57:53.056232    6988 system_pods.go:59] 8 kube-system pods found
	I0416 17:57:53.056303    6988 system_pods.go:61] "coredns-76f75df574-86z7h" [1ca004a0-0575-4576-a5ed-ba0891a7d277] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "etcd-multinode-945500" [245cef70-3506-471b-9bf6-dd14a2c23d8c] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kindnet-tp7jl" [91595b62-10ee-47cb-a0c9-2ca83ad70af7] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kube-apiserver-multinode-945500" [c6ae969a-de5d-4c7e-af09-b1a5eb21f2ab] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kube-controller-manager-multinode-945500" [01b937c2-9827-4240-83f0-3536fec5eb5e] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kube-proxy-rfxsg" [b740e6e0-4768-4dd4-a958-307662a92578] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kube-scheduler-multinode-945500" [a09e52e8-1ac2-4c22-8a3d-57969fae85a9] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "storage-provisioner" [3bd5cc95-eef6-473e-b6f9-898568046f1b] Running
	I0416 17:57:53.056378    6988 system_pods.go:74] duration metric: took 155.5639ms to wait for pod list to return data ...
	I0416 17:57:53.056378    6988 default_sa.go:34] waiting for default service account to be created ...
	I0416 17:57:53.242714    6988 request.go:629] Waited for 186.2414ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/default/serviceaccounts
	I0416 17:57:53.242956    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/default/serviceaccounts
	I0416 17:57:53.242956    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:53.243091    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:53.243091    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:53.246460    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:53.246460    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:53.246962    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:53.246962    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Content-Length: 261
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Audit-Id: da3e035a-782e-4d26-b641-e9ec06113208
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:53.247049    6988 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"26260d2a-9800-4f2e-87ba-a34049d52e3f","resourceVersion":"332","creationTimestamp":"2024-04-16T17:57:40Z"}}]}
	I0416 17:57:53.247481    6988 default_sa.go:45] found service account: "default"
	I0416 17:57:53.247563    6988 default_sa.go:55] duration metric: took 191.174ms for default service account to be created ...
	I0416 17:57:53.247563    6988 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 17:57:53.445373    6988 request.go:629] Waited for 197.6083ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:53.445373    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:53.445373    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:53.445373    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:53.445373    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:53.453613    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0416 17:57:53.453613    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:53.453613    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:53.453613    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:53.453613    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:53.453613    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:53.453613    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:53.453613    6988 round_trippers.go:580]     Audit-Id: a54cbc48-ccbf-4ab0-b75f-121f6c3ab39c
	I0416 17:57:53.454598    6988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56450 chars]
	I0416 17:57:53.457215    6988 system_pods.go:86] 8 kube-system pods found
	I0416 17:57:53.457215    6988 system_pods.go:89] "coredns-76f75df574-86z7h" [1ca004a0-0575-4576-a5ed-ba0891a7d277] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "etcd-multinode-945500" [245cef70-3506-471b-9bf6-dd14a2c23d8c] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kindnet-tp7jl" [91595b62-10ee-47cb-a0c9-2ca83ad70af7] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kube-apiserver-multinode-945500" [c6ae969a-de5d-4c7e-af09-b1a5eb21f2ab] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kube-controller-manager-multinode-945500" [01b937c2-9827-4240-83f0-3536fec5eb5e] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kube-proxy-rfxsg" [b740e6e0-4768-4dd4-a958-307662a92578] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kube-scheduler-multinode-945500" [a09e52e8-1ac2-4c22-8a3d-57969fae85a9] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "storage-provisioner" [3bd5cc95-eef6-473e-b6f9-898568046f1b] Running
	I0416 17:57:53.457215    6988 system_pods.go:126] duration metric: took 209.6402ms to wait for k8s-apps to be running ...
	I0416 17:57:53.457215    6988 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 17:57:53.465993    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:57:53.490843    6988 system_svc.go:56] duration metric: took 32.799ms WaitForService to wait for kubelet
	I0416 17:57:53.490843    6988 kubeadm.go:576] duration metric: took 13.1684808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:57:53.490945    6988 node_conditions.go:102] verifying NodePressure condition ...
	I0416 17:57:53.646796    6988 request.go:629] Waited for 155.5885ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes
	I0416 17:57:53.647092    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes
	I0416 17:57:53.647092    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:53.647092    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:53.647092    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:53.650750    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:53.650750    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:53.650750    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:53.650750    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:53.650750    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:53.651249    6988 round_trippers.go:580]     Audit-Id: a39fa908-8f98-49bc-a6db-1564faa14911
	I0416 17:57:53.651249    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:53.651249    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:53.651424    6988 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 4843 chars]
	I0416 17:57:53.651922    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 17:57:53.651922    6988 node_conditions.go:123] node cpu capacity is 2
	I0416 17:57:53.651922    6988 node_conditions.go:105] duration metric: took 160.9684ms to run NodePressure ...
	I0416 17:57:53.652035    6988 start.go:240] waiting for startup goroutines ...
	I0416 17:57:53.652035    6988 start.go:245] waiting for cluster config update ...
	I0416 17:57:53.652035    6988 start.go:254] writing updated cluster config ...
	I0416 17:57:53.653564    6988 out.go:177] 
	I0416 17:57:53.669380    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:57:53.669380    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:57:53.672905    6988 out.go:177] * Starting "multinode-945500-m02" worker node in "multinode-945500" cluster
	I0416 17:57:53.673088    6988 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 17:57:53.673617    6988 cache.go:56] Caching tarball of preloaded images
	I0416 17:57:53.673750    6988 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 17:57:53.673750    6988 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 17:57:53.674279    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:57:53.682401    6988 start.go:360] acquireMachinesLock for multinode-945500-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:57:53.682401    6988 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-945500-m02"
	I0416 17:57:53.682989    6988 start.go:93] Provisioning new machine with config: &{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0416 17:57:53.682989    6988 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0416 17:57:53.683581    6988 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 17:57:53.683581    6988 start.go:159] libmachine.API.Create for "multinode-945500" (driver="hyperv")
	I0416 17:57:53.683581    6988 client.go:168] LocalClient.Create starting
	I0416 17:57:53.684171    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 17:57:53.684171    6988 main.go:141] libmachine: Decoding PEM data...
	I0416 17:57:53.684730    6988 main.go:141] libmachine: Parsing certificate...
	I0416 17:57:53.684763    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 17:57:53.684763    6988 main.go:141] libmachine: Decoding PEM data...
	I0416 17:57:53.684763    6988 main.go:141] libmachine: Parsing certificate...
	I0416 17:57:53.684763    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 17:57:55.392368    6988 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 17:57:55.392368    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:55.393364    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 17:57:56.931487    6988 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 17:57:56.931487    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:56.931487    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 17:57:58.272841    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 17:57:58.273519    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:58.273519    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 17:58:01.537799    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 17:58:01.537799    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:01.539609    6988 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 17:58:01.848885    6988 main.go:141] libmachine: Creating SSH key...
	I0416 17:58:02.010218    6988 main.go:141] libmachine: Creating VM...
	I0416 17:58:02.011217    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 17:58:04.625040    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 17:58:04.625040    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:04.625917    6988 main.go:141] libmachine: Using switch "Default Switch"
	I0416 17:58:04.625917    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 17:58:06.258751    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 17:58:06.258751    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:06.258751    6988 main.go:141] libmachine: Creating VHD
	I0416 17:58:06.258751    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 17:58:09.852420    6988 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C09A8F8B-563A-41CF-AB1F-9B4C422F3FC9
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 17:58:09.852568    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:09.852568    6988 main.go:141] libmachine: Writing magic tar header
	I0416 17:58:09.852638    6988 main.go:141] libmachine: Writing SSH key tar header
	I0416 17:58:09.862039    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 17:58:12.878751    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:12.878751    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:12.878751    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\disk.vhd' -SizeBytes 20000MB
	I0416 17:58:15.237605    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:15.237605    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:15.237605    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-945500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 17:58:18.410858    6988 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-945500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 17:58:18.411873    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:18.411914    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-945500-m02 -DynamicMemoryEnabled $false
	I0416 17:58:20.486445    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:20.486524    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:20.486600    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-945500-m02 -Count 2
	I0416 17:58:22.474057    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:22.474057    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:22.474057    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-945500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\boot2docker.iso'
	I0416 17:58:24.877959    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:24.877959    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:24.878134    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-945500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\disk.vhd'
	I0416 17:58:27.308442    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:27.309253    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:27.309253    6988 main.go:141] libmachine: Starting VM...
	I0416 17:58:27.309346    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-945500-m02
	I0416 17:58:29.937973    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:29.937973    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:29.937973    6988 main.go:141] libmachine: Waiting for host to start...
	I0416 17:58:29.938140    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:32.040669    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:32.040669    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:32.040763    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:34.346849    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:34.346849    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:35.361237    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:37.380851    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:37.380851    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:37.381523    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:39.667097    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:39.667097    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:40.670143    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:42.688257    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:42.688257    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:42.688328    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:44.946196    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:44.946196    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:45.948919    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:47.976127    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:47.976127    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:47.976535    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:50.265300    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:50.265477    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:51.278063    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:53.353234    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:53.353234    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:53.353542    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:55.731097    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:58:55.731585    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:55.731648    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:57.706259    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:57.706259    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:57.706259    6988 machine.go:94] provisionDockerMachine start ...
	I0416 17:58:57.706337    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:59.674406    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:59.674406    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:59.675593    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:01.982982    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:01.982982    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:01.989231    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:02.000855    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:02.000855    6988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:59:02.131967    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 17:59:02.132116    6988 buildroot.go:166] provisioning hostname "multinode-945500-m02"
	I0416 17:59:02.132244    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:04.030355    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:04.031102    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:04.031102    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:06.380424    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:06.380424    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:06.385493    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:06.385574    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:06.385574    6988 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-945500-m02 && echo "multinode-945500-m02" | sudo tee /etc/hostname
	I0416 17:59:06.536173    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-945500-m02
	
	I0416 17:59:06.536238    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:08.514008    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:08.514084    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:08.514108    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:10.867331    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:10.867331    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:10.872002    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:10.872167    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:10.872167    6988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-945500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-945500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-945500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:59:11.029689    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:59:11.029689    6988 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 17:59:11.029689    6988 buildroot.go:174] setting up certificates
	I0416 17:59:11.029689    6988 provision.go:84] configureAuth start
	I0416 17:59:11.029689    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:13.049800    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:13.050575    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:13.050646    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:15.359589    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:15.359589    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:15.359846    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:17.299020    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:17.299020    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:17.300075    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:19.605590    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:19.605590    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:19.605590    6988 provision.go:143] copyHostCerts
	I0416 17:59:19.605792    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 17:59:19.606057    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 17:59:19.606057    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 17:59:19.606675    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 17:59:19.607815    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 17:59:19.608147    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 17:59:19.608226    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 17:59:19.608494    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 17:59:19.609301    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 17:59:19.609365    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 17:59:19.609365    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 17:59:19.609365    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 17:59:19.610613    6988 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-945500-m02 san=[127.0.0.1 172.19.91.6 localhost minikube multinode-945500-m02]
	I0416 17:59:19.702929    6988 provision.go:177] copyRemoteCerts
	I0416 17:59:19.710522    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:59:19.710522    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:21.626659    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:21.626659    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:21.627629    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:23.970899    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:23.970899    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:23.971221    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 17:59:24.079459    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3686883s)
	I0416 17:59:24.079459    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 17:59:24.080474    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 17:59:24.123694    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 17:59:24.124179    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0416 17:59:24.164830    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 17:59:24.165649    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 17:59:24.208692    6988 provision.go:87] duration metric: took 13.1782183s to configureAuth
	I0416 17:59:24.208692    6988 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:59:24.209067    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:59:24.209160    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:26.153425    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:26.153425    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:26.153714    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:28.507518    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:28.507518    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:28.511037    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:28.511634    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:28.511634    6988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 17:59:28.639516    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 17:59:28.639516    6988 buildroot.go:70] root file system type: tmpfs
	I0416 17:59:28.639516    6988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 17:59:28.639516    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:30.530854    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:30.531013    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:30.531013    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:32.826918    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:32.826918    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:32.832383    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:32.832984    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:32.832984    6988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.91.227"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 17:59:32.992600    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.91.227
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 17:59:32.992774    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:34.963694    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:34.963694    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:34.963799    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:37.247922    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:37.247922    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:37.252024    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:37.252024    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:37.252024    6988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 17:59:39.216273    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 17:59:39.216273    6988 machine.go:97] duration metric: took 41.5076568s to provisionDockerMachine
	I0416 17:59:39.216367    6988 client.go:171] duration metric: took 1m45.5267916s to LocalClient.Create
	I0416 17:59:39.216420    6988 start.go:167] duration metric: took 1m45.5268452s to libmachine.API.Create "multinode-945500"
	I0416 17:59:39.216420    6988 start.go:293] postStartSetup for "multinode-945500-m02" (driver="hyperv")
	I0416 17:59:39.216420    6988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:59:39.225464    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:59:39.225464    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:41.131652    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:41.131652    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:41.132015    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:43.445904    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:43.445904    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:43.446473    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 17:59:43.549649    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3239396s)
	I0416 17:59:43.558710    6988 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:59:43.563635    6988 command_runner.go:130] > NAME=Buildroot
	I0416 17:59:43.563635    6988 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 17:59:43.563635    6988 command_runner.go:130] > ID=buildroot
	I0416 17:59:43.563635    6988 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 17:59:43.563635    6988 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 17:59:43.563635    6988 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:59:43.563635    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 17:59:43.565096    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 17:59:43.566332    6988 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 17:59:43.566332    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 17:59:43.575822    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:59:43.593251    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 17:59:43.635050    6988 start.go:296] duration metric: took 4.4183786s for postStartSetup
	I0416 17:59:43.637173    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:45.591586    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:45.591586    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:45.591966    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:47.994749    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:47.994749    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:47.994889    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:59:47.996574    6988 start.go:128] duration metric: took 1m54.3070064s to createHost
	I0416 17:59:47.996664    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:49.890109    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:49.890109    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:49.890628    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:52.220872    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:52.220872    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:52.225852    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:52.226248    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:52.226248    6988 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 17:59:52.368040    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713290392.538512769
	
	I0416 17:59:52.368040    6988 fix.go:216] guest clock: 1713290392.538512769
	I0416 17:59:52.368040    6988 fix.go:229] Guest: 2024-04-16 17:59:52.538512769 +0000 UTC Remote: 2024-04-16 17:59:47.9965749 +0000 UTC m=+309.651339801 (delta=4.541937869s)
	I0416 17:59:52.368159    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:54.442418    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:54.442507    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:54.442581    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:56.760874    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:56.760874    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:56.765985    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:56.766627    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:56.766627    6988 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713290392
	I0416 17:59:56.909969    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 17:59:52 UTC 2024
	
	I0416 17:59:56.909969    6988 fix.go:236] clock set: Tue Apr 16 17:59:52 UTC 2024
	 (err=<nil>)
	I0416 17:59:56.909969    6988 start.go:83] releasing machines lock for "multinode-945500-m02", held for 2m3.2205685s
	I0416 17:59:56.909969    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:58.843464    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:58.843464    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:58.843546    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:00:01.159738    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:00:01.160789    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:01.160917    6988 out.go:177] * Found network options:
	I0416 18:00:01.161771    6988 out.go:177]   - NO_PROXY=172.19.91.227
	W0416 18:00:01.162783    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 18:00:01.163550    6988 out.go:177]   - NO_PROXY=172.19.91.227
	W0416 18:00:01.163820    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 18:00:01.165081    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 18:00:01.167381    6988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 18:00:01.167483    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:00:01.178390    6988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 18:00:01.178390    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:00:03.244075    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:00:03.244075    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:00:03.244075    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:03.244075    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:03.244356    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:00:03.244356    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:00:05.758057    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:00:05.758057    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:05.758057    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:00:05.784117    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:00:05.784117    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:05.784117    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:00:05.960484    6988 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 18:00:05.960638    6988 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7929841s)
	I0416 18:00:05.960638    6988 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0416 18:00:05.960638    6988 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.781976s)
	W0416 18:00:05.960638    6988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 18:00:05.975053    6988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 18:00:06.012668    6988 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0416 18:00:06.012756    6988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 18:00:06.012756    6988 start.go:494] detecting cgroup driver to use...
	I0416 18:00:06.012756    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:00:06.050850    6988 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0416 18:00:06.061001    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 18:00:06.091844    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 18:00:06.110783    6988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 18:00:06.118610    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 18:00:06.144577    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:00:06.171490    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 18:00:06.198550    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:00:06.226893    6988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 18:00:06.255518    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 18:00:06.285057    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 18:00:06.314136    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 18:00:06.344453    6988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 18:00:06.362440    6988 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 18:00:06.374326    6988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 18:00:06.400901    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:06.587114    6988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 18:00:06.621553    6988 start.go:494] detecting cgroup driver to use...
	I0416 18:00:06.630654    6988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 18:00:06.656160    6988 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0416 18:00:06.656235    6988 command_runner.go:130] > [Unit]
	I0416 18:00:06.656235    6988 command_runner.go:130] > Description=Docker Application Container Engine
	I0416 18:00:06.656235    6988 command_runner.go:130] > Documentation=https://docs.docker.com
	I0416 18:00:06.656235    6988 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0416 18:00:06.656235    6988 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0416 18:00:06.656235    6988 command_runner.go:130] > StartLimitBurst=3
	I0416 18:00:06.656235    6988 command_runner.go:130] > StartLimitIntervalSec=60
	I0416 18:00:06.656235    6988 command_runner.go:130] > [Service]
	I0416 18:00:06.656235    6988 command_runner.go:130] > Type=notify
	I0416 18:00:06.656235    6988 command_runner.go:130] > Restart=on-failure
	I0416 18:00:06.656235    6988 command_runner.go:130] > Environment=NO_PROXY=172.19.91.227
	I0416 18:00:06.656235    6988 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0416 18:00:06.656235    6988 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0416 18:00:06.656235    6988 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0416 18:00:06.656235    6988 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0416 18:00:06.656235    6988 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0416 18:00:06.656235    6988 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0416 18:00:06.656235    6988 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0416 18:00:06.656235    6988 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0416 18:00:06.656235    6988 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0416 18:00:06.656235    6988 command_runner.go:130] > ExecStart=
	I0416 18:00:06.656778    6988 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0416 18:00:06.656778    6988 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0416 18:00:06.656820    6988 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0416 18:00:06.656870    6988 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0416 18:00:06.656870    6988 command_runner.go:130] > LimitNOFILE=infinity
	I0416 18:00:06.656911    6988 command_runner.go:130] > LimitNPROC=infinity
	I0416 18:00:06.656911    6988 command_runner.go:130] > LimitCORE=infinity
	I0416 18:00:06.656911    6988 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0416 18:00:06.656911    6988 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0416 18:00:06.656911    6988 command_runner.go:130] > TasksMax=infinity
	I0416 18:00:06.656911    6988 command_runner.go:130] > TimeoutStartSec=0
	I0416 18:00:06.656911    6988 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0416 18:00:06.656911    6988 command_runner.go:130] > Delegate=yes
	I0416 18:00:06.656911    6988 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0416 18:00:06.656911    6988 command_runner.go:130] > KillMode=process
	I0416 18:00:06.656911    6988 command_runner.go:130] > [Install]
	I0416 18:00:06.656911    6988 command_runner.go:130] > WantedBy=multi-user.target
	I0416 18:00:06.666231    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:00:06.697894    6988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 18:00:06.737622    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:00:06.771467    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:00:06.804240    6988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 18:00:06.854175    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:00:06.875932    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:00:06.907847    6988 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0416 18:00:06.916941    6988 ssh_runner.go:195] Run: which cri-dockerd
	I0416 18:00:06.922573    6988 command_runner.go:130] > /usr/bin/cri-dockerd
	I0416 18:00:06.930663    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 18:00:06.948367    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 18:00:06.987048    6988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 18:00:07.191969    6988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 18:00:07.382844    6988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 18:00:07.382971    6988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 18:00:07.425295    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:07.611967    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 18:00:10.072387    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.460242s)
	I0416 18:00:10.082602    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 18:00:10.120067    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 18:00:10.155302    6988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 18:00:10.359234    6988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 18:00:10.554817    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:10.747932    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 18:00:10.786544    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 18:00:10.819302    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:10.999957    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 18:00:11.099015    6988 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 18:00:11.111636    6988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 18:00:11.122504    6988 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0416 18:00:11.122504    6988 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0416 18:00:11.122504    6988 command_runner.go:130] > Device: 0,22	Inode: 871         Links: 1
	I0416 18:00:11.122504    6988 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0416 18:00:11.122504    6988 command_runner.go:130] > Access: 2024-04-16 18:00:11.194886190 +0000
	I0416 18:00:11.122504    6988 command_runner.go:130] > Modify: 2024-04-16 18:00:11.194886190 +0000
	I0416 18:00:11.122504    6988 command_runner.go:130] > Change: 2024-04-16 18:00:11.200886564 +0000
	I0416 18:00:11.122504    6988 command_runner.go:130] >  Birth: -
	I0416 18:00:11.122504    6988 start.go:562] Will wait 60s for crictl version
	I0416 18:00:11.131362    6988 ssh_runner.go:195] Run: which crictl
	I0416 18:00:11.136657    6988 command_runner.go:130] > /usr/bin/crictl
	I0416 18:00:11.146046    6988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 18:00:11.199867    6988 command_runner.go:130] > Version:  0.1.0
	I0416 18:00:11.199867    6988 command_runner.go:130] > RuntimeName:  docker
	I0416 18:00:11.199867    6988 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0416 18:00:11.199867    6988 command_runner.go:130] > RuntimeApiVersion:  v1
	I0416 18:00:11.199867    6988 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 18:00:11.205859    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 18:00:11.237864    6988 command_runner.go:130] > 26.0.1
	I0416 18:00:11.245954    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 18:00:11.279233    6988 command_runner.go:130] > 26.0.1
	I0416 18:00:11.280642    6988 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 18:00:11.281457    6988 out.go:177]   - env NO_PROXY=172.19.91.227
	I0416 18:00:11.282089    6988 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 18:00:11.285919    6988 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 18:00:11.285919    6988 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 18:00:11.285919    6988 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 18:00:11.285919    6988 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 18:00:11.289016    6988 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 18:00:11.289092    6988 ip.go:210] interface addr: 172.19.80.1/20
	I0416 18:00:11.297335    6988 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 18:00:11.303557    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
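The one-liner above is how minikube idempotently pins `host.minikube.internal` to the gateway IP: strip any stale entry, append the current one, then copy the result back over `/etc/hosts`. A standalone sketch of the same rewrite, using a scratch file in place of the real `/etc/hosts` (so no `sudo` is needed; the `10.0.0.9` stale entry is invented for illustration):

```shell
# Idempotent hosts rewrite: drop any old host.minikube.internal line,
# then append the current gateway IP from the log above (172.19.80.1).
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.19.80.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"   # minikube does `sudo cp` instead of mv
cat "$hosts"
```

Running the rewrite twice leaves exactly one `host.minikube.internal` line, which is the property the `grep -v` prefix exists to guarantee.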
	I0416 18:00:11.324932    6988 mustload.go:65] Loading cluster: multinode-945500
	I0416 18:00:11.324932    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:00:11.326302    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:00:13.285643    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:00:13.285643    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:13.285643    6988 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:00:13.285961    6988 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500 for IP: 172.19.91.6
	I0416 18:00:13.285961    6988 certs.go:194] generating shared ca certs ...
	I0416 18:00:13.285961    6988 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:13.286821    6988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 18:00:13.287059    6988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 18:00:13.287230    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 18:00:13.287572    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 18:00:13.287754    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 18:00:13.287938    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 18:00:13.288586    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 18:00:13.288985    6988 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 18:00:13.289144    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 18:00:13.289487    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 18:00:13.289775    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 18:00:13.290139    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 18:00:13.290481    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 18:00:13.290481    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.291100    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.291100    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.291100    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 18:00:13.340860    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 18:00:13.392323    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 18:00:13.436417    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 18:00:13.477907    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 18:00:13.525089    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 18:00:13.566780    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 18:00:13.622111    6988 ssh_runner.go:195] Run: openssl version
	I0416 18:00:13.630969    6988 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0416 18:00:13.644134    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 18:00:13.673969    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.680217    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.680500    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.688237    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.696922    6988 command_runner.go:130] > 3ec20f2e
	I0416 18:00:13.708831    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 18:00:13.733581    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 18:00:13.760217    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.766741    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.767776    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.776508    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.784406    6988 command_runner.go:130] > b5213941
	I0416 18:00:13.793775    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 18:00:13.827353    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 18:00:13.855989    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.863594    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.863671    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.872713    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.881385    6988 command_runner.go:130] > 51391683
	I0416 18:00:13.891867    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
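The three `openssl x509 -hash` / `ln -fs` rounds above install each CA into the node's trust store: a PEM is placed under `/usr/share/ca-certificates` and symlinked in `/etc/ssl/certs` under its subject-hash name, which is how OpenSSL locates trust anchors (the same layout `openssl rehash` produces). A sketch of one round, with a scratch directory and a freshly generated self-signed cert standing in for the real paths and the minikube CA:

```shell
# Hash-named symlink install, as in the log: <subject-hash>.0 -> cert.pem
set -eu
certs=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA" \
  -keyout "$certs/ca.key" -out "$certs/minikubeCA.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$certs/minikubeCA.pem")
ln -fs "$certs/minikubeCA.pem" "$certs/$hash.0"
ls -l "$certs/$hash.0"
```

The `.0` suffix disambiguates distinct certificates whose subjects hash to the same value; minikube's `test -L || ln -fs` guard makes the step safe to repeat.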
	I0416 18:00:13.919310    6988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 18:00:13.925213    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 18:00:13.925213    6988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 18:00:13.925406    6988 kubeadm.go:928] updating node {m02 172.19.91.6 8443 v1.29.3 docker false true} ...
	I0416 18:00:13.925406    6988 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-945500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.91.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 18:00:13.933333    6988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 18:00:13.949475    6988 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	I0416 18:00:13.949595    6988 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0416 18:00:13.961381    6988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0416 18:00:13.978194    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0416 18:00:13.978194    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0416 18:00:13.978194    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0416 18:00:13.978338    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0416 18:00:13.978338    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0416 18:00:13.989548    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:00:13.989548    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0416 18:00:13.997857    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0416 18:00:14.012312    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0416 18:00:14.012312    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0416 18:00:14.012312    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0416 18:00:14.012312    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0416 18:00:14.012312    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0416 18:00:14.012312    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0416 18:00:14.012312    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0416 18:00:14.024318    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0416 18:00:14.111282    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0416 18:00:14.111282    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0416 18:00:14.111282    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
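The `Not caching binary, using …?checksum=file:…sha256` lines above describe a checksum-pinned transfer: the downloaded binary must hash to the value in the published `.sha256` file before it is shipped to the node. A minimal local sketch of that check, with a generated file standing in for the kubelet download and its sidecar checksum:

```shell
# Verify a download against its published sha256 before installing it.
set -eu
bin=$(mktemp)
printf 'stand-in kubelet bytes' > "$bin"
sha256sum "$bin" | cut -d' ' -f1 > "$bin.sha256"   # stands in for kubelet.sha256
want=$(cat "$bin.sha256")
got=$(sha256sum "$bin" | cut -d' ' -f1)
[ "$want" = "$got" ] && echo "checksum ok"
```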
	I0416 18:00:15.159706    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0416 18:00:15.176637    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0416 18:00:15.206211    6988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 18:00:15.245325    6988 ssh_runner.go:195] Run: grep 172.19.91.227	control-plane.minikube.internal$ /etc/hosts
	I0416 18:00:15.251624    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.91.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 18:00:15.280749    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:15.453073    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 18:00:15.479748    6988 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:00:15.480950    6988 start.go:316] joinCluster: &{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 18:00:15.481069    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0416 18:00:15.481184    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:00:17.505631    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:00:17.505631    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:17.506531    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:00:19.802120    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 18:00:19.802120    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:19.802309    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:00:19.993353    6988 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token gegaat.x425l3cmfd8uouwr --discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 18:00:19.993446    6988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0": (4.5121206s)
	I0416 18:00:19.993446    6988 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0416 18:00:19.993532    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gegaat.x425l3cmfd8uouwr --discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-945500-m02"
	I0416 18:00:20.187968    6988 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 18:00:21.976702    6988 command_runner.go:130] > [preflight] Running pre-flight checks
	I0416 18:00:21.976807    6988 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0416 18:00:21.976807    6988 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0416 18:00:21.976877    6988 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 18:00:21.976877    6988 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 18:00:21.976877    6988 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0416 18:00:21.976877    6988 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0416 18:00:21.976877    6988 command_runner.go:130] > This node has joined the cluster:
	I0416 18:00:21.976877    6988 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0416 18:00:21.976946    6988 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0416 18:00:21.976946    6988 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0416 18:00:21.977006    6988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gegaat.x425l3cmfd8uouwr --discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-945500-m02": (1.9833608s)
	I0416 18:00:21.977121    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0416 18:00:22.175327    6988 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0416 18:00:22.347211    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-945500-m02 minikube.k8s.io/updated_at=2024_04_16T18_00_22_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=multinode-945500 minikube.k8s.io/primary=false
	I0416 18:00:22.461008    6988 command_runner.go:130] > node/multinode-945500-m02 labeled
	I0416 18:00:22.461089    6988 start.go:318] duration metric: took 6.9798519s to joinCluster
	I0416 18:00:22.461089    6988 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0416 18:00:22.462104    6988 out.go:177] * Verifying Kubernetes components...
	I0416 18:00:22.462104    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:00:22.473344    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:22.642951    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 18:00:22.666251    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 18:00:22.666816    6988 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.91.227:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 18:00:22.667170    6988 node_ready.go:35] waiting up to 6m0s for node "multinode-945500-m02" to be "Ready" ...
	I0416 18:00:22.667170    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:22.667170    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:22.667170    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:22.667170    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:22.680255    6988 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0416 18:00:22.680255    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Audit-Id: 79e76c8e-11df-4387-9f30-9f5f1755a5e0
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:22.680255    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:22.680255    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Content-Length: 3925
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:22 GMT
	I0416 18:00:22.680255    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"594","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2901 chars]
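Each round trip in the `node_ready` wait above re-fetches the Node object; the poll ends only when `status.conditions` carries a `Ready` condition with status `True` (the bodies above are still missing one, so polling continues). An offline sketch of that per-poll check, with a canned response standing in for the API server and a crude substring match in place of a real JSON parser:

```shell
# Decide readiness from a Node object, as each poll iteration does.
node_json='{"status":{"conditions":[{"type":"Ready","status":"True"}]}}'
if printf '%s' "$node_json" | grep -q '"type":"Ready","status":"True"'; then
  state=Ready
else
  state=NotReady
fi
echo "$state"
```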
	I0416 18:00:23.181369    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:23.181855    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:23.181855    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:23.181855    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:23.186449    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:23.186582    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:23.186582    6988 round_trippers.go:580]     Audit-Id: 4bae6118-587b-4d9b-a922-3970c34bf8ba
	I0416 18:00:23.186582    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:23.186582    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:23.186673    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:23.186717    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:23.186756    6988 round_trippers.go:580]     Content-Length: 3925
	I0416 18:00:23.186756    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:23 GMT
	I0416 18:00:23.186949    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"594","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2901 chars]
	I0416 18:00:23.677191    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:23.677191    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:23.677317    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:23.677317    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:23.680492    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:23.680492    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Audit-Id: a7f57610-9860-47cd-ab38-3f286c67dceb
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:23.680492    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:23.680492    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Content-Length: 3925
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:23 GMT
	I0416 18:00:23.681055    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"594","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fie
ldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2901 chars]
	I0416 18:00:24.175480    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:24.175572    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:24.175572    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:24.175572    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:24.179352    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:24.179352    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:24.179352    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Content-Length: 3925
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:24 GMT
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Audit-Id: aacf48fe-adbc-4413-b29d-2b958ba7f686
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:24.179352    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:24.179613    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"594","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fie
ldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2901 chars]
	I0416 18:00:24.673856    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:24.673925    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:24.673925    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:24.673925    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:24.676592    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:24.676592    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Audit-Id: 000742e0-7f5e-446d-8a61-8bd8bd82aedc
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:24.676592    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:24.676592    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:24 GMT
	I0416 18:00:24.677350    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:24.677739    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:25.170259    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:25.170259    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:25.170259    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:25.170259    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:25.173426    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:25.173426    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:25 GMT
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Audit-Id: f9c1a393-b288-45a4-98d3-52d7af11f587
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:25.173426    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:25.173426    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:25.173964    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:25.669435    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:25.669435    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:25.669435    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:25.669530    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:25.672183    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:25.672183    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:25.672183    6988 round_trippers.go:580]     Audit-Id: 56bf1cb1-d49e-4031-8ee9-9392bbe1f6c8
	I0416 18:00:25.672183    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:25.672183    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:25.672183    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:25.672183    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:25.672183    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:25.673192    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:25 GMT
	I0416 18:00:25.673265    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:26.181911    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:26.182121    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:26.182121    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:26.182121    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:26.186490    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:26.186490    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:26.186490    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:26.186490    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:26.186490    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:26.186490    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:26.186490    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:26 GMT
	I0416 18:00:26.186580    6988 round_trippers.go:580]     Audit-Id: 88264325-f44e-4d75-8f22-6b8c5c0e9719
	I0416 18:00:26.186580    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:26.186613    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:26.679044    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:26.679044    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:26.679044    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:26.679044    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:26.683356    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:26.683356    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:26.683356    6988 round_trippers.go:580]     Audit-Id: c54e17f7-7d89-4371-9a95-03073ffa0ffb
	I0416 18:00:26.683356    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:26.683356    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:26.683356    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:26.683356    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:26.683356    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:26.683527    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:26 GMT
	I0416 18:00:26.683689    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:26.683980    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:27.180698    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:27.180698    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:27.181090    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:27.181090    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:27.184901    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:27.184901    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:27.184901    6988 round_trippers.go:580]     Audit-Id: b36ab219-082e-454d-8277-5ffcef9ec16b
	I0416 18:00:27.184901    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:27.184901    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:27.184901    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:27.184901    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:27.185540    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:27.185540    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:27 GMT
	I0416 18:00:27.185671    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:27.678872    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:27.678872    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:27.678975    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:27.678975    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:27.682351    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:27.683004    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Audit-Id: f599c3f7-7c68-4f15-8953-bfd791eb0198
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:27.683054    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:27.683054    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:27 GMT
	I0416 18:00:27.683286    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:28.183860    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:28.183860    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:28.183860    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:28.183860    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:28.186319    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:28.186319    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Audit-Id: 872de824-f646-4d43-860c-2165005c98a0
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:28.186319    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:28.186319    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:28 GMT
	I0416 18:00:28.187336    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:28.670992    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:28.670992    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:28.670992    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:28.670992    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:28.675123    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:28.675123    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Audit-Id: 098493ef-9038-4b08-bf9e-667a6c61491f
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:28.675123    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:28.675123    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:28 GMT
	I0416 18:00:28.675123    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:29.174836    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:29.174890    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:29.174945    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:29.174945    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:29.179018    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:29.179018    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:29 GMT
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Audit-Id: c31ffe7d-9164-4329-85bd-7a52ce9c45ff
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:29.179018    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:29.179018    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:29.179018    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:29.179706    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:29.677336    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:29.677336    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:29.677336    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:29.677336    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:29.681001    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:29.681227    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:29.681286    6988 round_trippers.go:580]     Audit-Id: 389d232b-c9c8-4769-869a-1c7205097848
	I0416 18:00:29.681330    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:29.681330    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:29.681367    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:29.681367    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:29.681367    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:29.681367    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:29 GMT
	I0416 18:00:29.681367    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:30.179989    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:30.179989    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:30.179989    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:30.179989    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:30.184557    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:30.184557    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Audit-Id: 2d0a23fe-1858-420a-8f7d-89a4ab9e2074
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:30.184860    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:30.184860    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:30 GMT
	I0416 18:00:30.185147    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:30.678172    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:30.678172    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:30.678172    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:30.678172    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:30.681395    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:30.681395    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:30.681395    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:30.681395    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:30.681395    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:30.681395    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:30.681395    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:30.681395    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:30 GMT
	I0416 18:00:30.682030    6988 round_trippers.go:580]     Audit-Id: d89d2b5b-078b-40e7-a8de-db37ba442614
	I0416 18:00:30.682245    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:31.177211    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:31.177533    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:31.177533    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:31.177533    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:31.252985    6988 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0416 18:00:31.252985    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:31.252985    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:31.252985    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:31.252985    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:31.252985    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:31.252985    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:31 GMT
	I0416 18:00:31.252985    6988 round_trippers.go:580]     Audit-Id: 874c3508-0079-436c-9ee6-4bfd92a9fb2a
	I0416 18:00:31.253576    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:31.253576    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:31.682017    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:31.682017    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:31.682017    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:31.682017    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:31.684916    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:31.685729    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:31.685729    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:31.685729    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:31 GMT
	I0416 18:00:31.685729    6988 round_trippers.go:580]     Audit-Id: d159045d-d37c-4252-bd61-8c73f50b03f8
	I0416 18:00:31.685830    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:31.685830    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:31.685830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:31.685830    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:32.173658    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:32.173658    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:32.173658    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:32.173658    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:32.177586    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:32.177586    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:32.177586    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:32.177586    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:32 GMT
	I0416 18:00:32.177586    6988 round_trippers.go:580]     Audit-Id: d53ca0a9-698a-4e2e-92c6-bda133162c76
	I0416 18:00:32.177586    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:32.177586    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:32.177586    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:32.178475    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:32.678024    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:32.678024    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:32.678024    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:32.678024    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:32.682085    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:32.682614    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:32.682614    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:32.682614    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:32 GMT
	I0416 18:00:32.682614    6988 round_trippers.go:580]     Audit-Id: 165d0d28-6574-4108-94db-5907ad039dd6
	I0416 18:00:32.682614    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:32.682614    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:32.682684    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:32.682989    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:33.168664    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:33.168922    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:33.168922    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:33.168922    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:33.172390    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:33.172390    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:33.172390    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:33.172390    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:33.172390    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:33.172390    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:33.172390    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:33 GMT
	I0416 18:00:33.172390    6988 round_trippers.go:580]     Audit-Id: ba696923-3f1a-4e11-8165-651eef11660a
	I0416 18:00:33.173411    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:33.676259    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:33.676259    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:33.676259    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:33.676259    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:33.680629    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:33.680629    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:33.680629    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:33.681219    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:33.681219    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:33.681219    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:33 GMT
	I0416 18:00:33.681219    6988 round_trippers.go:580]     Audit-Id: 7be99938-6273-447f-8367-634cd5f0a4de
	I0416 18:00:33.681219    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:33.681531    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:33.682462    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:34.178701    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:34.178701    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:34.178701    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:34.178701    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:34.181286    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:34.181286    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:34.181286    6988 round_trippers.go:580]     Audit-Id: f6019dfe-ab29-48d8-9d01-ee729ec66029
	I0416 18:00:34.181286    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:34.181286    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:34.181286    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:34.181286    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:34.181286    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:34 GMT
	I0416 18:00:34.181975    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:34.669380    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:34.669668    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:34.669668    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:34.669668    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:34.672465    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:34.672465    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:34.672465    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:34.672465    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:34.672465    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:34.672465    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:34 GMT
	I0416 18:00:34.672465    6988 round_trippers.go:580]     Audit-Id: a8719766-b414-4604-94c0-e20be6a01464
	I0416 18:00:34.672465    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:34.673674    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:35.169393    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:35.169618    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:35.169692    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:35.169692    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:35.174028    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:35.174028    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:35.174209    6988 round_trippers.go:580]     Audit-Id: ea553a57-8167-487c-a417-8cf0ded53743
	I0416 18:00:35.174209    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:35.174209    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:35.174209    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:35.174209    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:35.174209    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:35 GMT
	I0416 18:00:35.174511    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:35.682247    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:35.682650    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:35.682650    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:35.682650    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:35.685938    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:35.685938    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:35.685938    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:35.685938    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:35.685938    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:35.685938    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:35.685938    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:35 GMT
	I0416 18:00:35.685938    6988 round_trippers.go:580]     Audit-Id: 82dc03b1-e6f8-433d-ac2b-277fc69a2b99
	I0416 18:00:35.686923    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:35.687544    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:36.182291    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:36.182393    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:36.182393    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:36.182442    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:36.190024    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 18:00:36.190024    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:36.190024    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:36.190024    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:36.190024    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:36.190024    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:36.190024    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:36 GMT
	I0416 18:00:36.190024    6988 round_trippers.go:580]     Audit-Id: a48a8529-ba4d-49a4-90a4-d4a77c7c5001
	I0416 18:00:36.190657    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:36.677065    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:36.677162    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:36.677162    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:36.677162    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:36.680646    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:36.680646    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:36.680646    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:36.680646    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:36.680646    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:36 GMT
	I0416 18:00:36.681185    6988 round_trippers.go:580]     Audit-Id: e4e94e54-d688-4263-a0ef-d154f5f4abeb
	I0416 18:00:36.681185    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:36.681185    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:36.681442    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:37.174195    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:37.174195    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:37.174634    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:37.174634    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:37.178029    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:37.178029    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:37.178830    6988 round_trippers.go:580]     Audit-Id: 55aa8476-6f9d-4256-9569-30e89b1a496b
	I0416 18:00:37.178830    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:37.178830    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:37.178830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:37.178830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:37.178830    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:37 GMT
	I0416 18:00:37.179087    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:37.673081    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:37.673348    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:37.673425    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:37.673425    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:37.677095    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:37.677095    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:37.677095    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:37.677095    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:37.677193    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:37.677193    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:37 GMT
	I0416 18:00:37.677193    6988 round_trippers.go:580]     Audit-Id: f84a1c1a-51f5-4ca5-aedb-2f21bb70141f
	I0416 18:00:37.677193    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:37.677583    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:38.171025    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:38.171133    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:38.171133    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:38.171133    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:38.174956    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:38.174956    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:38.174956    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:38.174956    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:38.175478    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:38 GMT
	I0416 18:00:38.175478    6988 round_trippers.go:580]     Audit-Id: ad79e752-a790-4167-88de-0fa0a1ce2c7f
	I0416 18:00:38.175478    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:38.175478    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:38.175685    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:38.176345    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:38.682781    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:38.682781    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:38.682781    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:38.682875    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:38.687443    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:38.687443    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:38.687443    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:38 GMT
	I0416 18:00:38.687443    6988 round_trippers.go:580]     Audit-Id: 9f833ee4-3fc1-4823-99f9-056bf39a2137
	I0416 18:00:38.687443    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:38.687443    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:38.687443    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:38.687443    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:38.687880    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:39.181718    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:39.181718    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:39.181718    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:39.181718    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:39.185234    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:39.185234    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:39.185234    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:39 GMT
	I0416 18:00:39.185234    6988 round_trippers.go:580]     Audit-Id: c944df6e-2f72-4b2f-84ed-0ef01d4bf4ad
	I0416 18:00:39.185234    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:39.185234    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:39.185234    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:39.185234    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:39.186227    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:39.679471    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:39.679471    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:39.679471    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:39.679471    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:39.683435    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:39.683435    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:39.683435    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:39.683435    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:39.683435    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:39.683435    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:39 GMT
	I0416 18:00:39.683435    6988 round_trippers.go:580]     Audit-Id: 72ce3907-afe5-4673-a364-1b0ade9a63a2
	I0416 18:00:39.683435    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:39.684439    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:40.179709    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:40.179709    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:40.179709    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:40.179709    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:40.182280    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:40.182280    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:40.182280    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:40.182280    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:40.182280    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:40.182280    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:40 GMT
	I0416 18:00:40.182280    6988 round_trippers.go:580]     Audit-Id: 15242798-963e-4292-8f78-c57c95f730a6
	I0416 18:00:40.182280    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:40.183037    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:40.183378    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:40.679352    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:40.679436    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:40.679436    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:40.679436    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:40.682752    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:40.682752    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:40.682752    6988 round_trippers.go:580]     Audit-Id: e11e0806-566d-477a-bcb8-8829648fc79a
	I0416 18:00:40.682752    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:40.682752    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:40.682752    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:40.682752    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:40.682752    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:40 GMT
	I0416 18:00:40.683363    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:41.181519    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:41.181623    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.181623    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.181623    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.184563    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.184563    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.184563    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.184563    6988 round_trippers.go:580]     Audit-Id: 8c5f2f81-67e0-45b9-81aa-b9f9cb72a322
	I0416 18:00:41.184563    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.185366    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.185366    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.185366    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.185630    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"630","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3268 chars]
	I0416 18:00:41.186155    6988 node_ready.go:49] node "multinode-945500-m02" has status "Ready":"True"
	I0416 18:00:41.186155    6988 node_ready.go:38] duration metric: took 18.5179332s for node "multinode-945500-m02" to be "Ready" ...
	I0416 18:00:41.186235    6988 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:00:41.186380    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 18:00:41.186380    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.186380    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.186461    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.190907    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:41.191511    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.191511    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.191511    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.191511    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.191511    6988 round_trippers.go:580]     Audit-Id: 5b40846d-502b-40b4-b4e6-b0c0c199dcda
	I0416 18:00:41.191511    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.191511    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.194735    6988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"630"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70406 chars]
	I0416 18:00:41.197721    6988 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.197721    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:00:41.197721    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.197721    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.197721    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.200304    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.201307    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.201307    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.201307    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.201307    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.201307    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.201307    6988 round_trippers.go:580]     Audit-Id: ddd585b2-d4a5-4fc9-9e78-3d162e0d75cf
	I0416 18:00:41.201307    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.201671    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0416 18:00:41.202254    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.202254    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.202254    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.202254    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.204830    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.204830    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.204830    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.204830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.204830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.204830    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.204830    6988 round_trippers.go:580]     Audit-Id: 5615a17f-6d55-4784-b914-b1262342e4ef
	I0416 18:00:41.204830    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.205530    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.206190    6988 pod_ready.go:92] pod "coredns-76f75df574-86z7h" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.206190    6988 pod_ready.go:81] duration metric: took 8.4686ms for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.206190    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.206190    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:00:41.206190    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.206190    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.206190    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.208799    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.208799    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.208799    6988 round_trippers.go:580]     Audit-Id: ae8a0c71-2dd6-45b7-96d9-80a7e15fec82
	I0416 18:00:41.208799    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.208799    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.208799    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.208799    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.208799    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.209788    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"245cef70-3506-471b-9bf6-dd14a2c23d8c","resourceVersion":"372","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.91.227:2379","kubernetes.io/config.hash":"c735a32dacf9631b2b4787fe99cff316","kubernetes.io/config.mirror":"c735a32dacf9631b2b4787fe99cff316","kubernetes.io/config.seen":"2024-04-16T17:57:28.101466445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0416 18:00:41.209825    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.209825    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.209825    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.209825    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.211989    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.211989    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.211989    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.211989    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.211989    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.211989    6988 round_trippers.go:580]     Audit-Id: 0c5d029c-085b-4f7e-a116-d1258a75da93
	I0416 18:00:41.211989    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.211989    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.213223    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.213811    6988 pod_ready.go:92] pod "etcd-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.213811    6988 pod_ready.go:81] duration metric: took 7.62ms for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.213811    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.213811    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-945500
	I0416 18:00:41.213811    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.213811    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.213811    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.216448    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.216448    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.216448    6988 round_trippers.go:580]     Audit-Id: 6b2d211f-a673-4f75-931c-2de9b00a2806
	I0416 18:00:41.216448    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.216448    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.216448    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.216448    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.216448    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.217191    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-945500","namespace":"kube-system","uid":"c6ae969a-de5d-4c7e-af09-b1a5eb21f2ab","resourceVersion":"314","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.91.227:8443","kubernetes.io/config.hash":"564fae5a3e5851c815d6092b123a5395","kubernetes.io/config.mirror":"564fae5a3e5851c815d6092b123a5395","kubernetes.io/config.seen":"2024-04-16T17:57:28.101471746Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0416 18:00:41.217191    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.217778    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.217778    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.217778    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.219971    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.219971    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.219971    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.219971    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.219971    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.219971    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.219971    6988 round_trippers.go:580]     Audit-Id: 97c48e0c-3227-4fdb-bb53-2c5b0a99e16e
	I0416 18:00:41.219971    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.220674    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.220674    6988 pod_ready.go:92] pod "kube-apiserver-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.220674    6988 pod_ready.go:81] duration metric: took 6.8627ms for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.220674    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.220674    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-945500
	I0416 18:00:41.221243    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.221243    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.221243    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.223295    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.223295    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.224145    6988 round_trippers.go:580]     Audit-Id: 5ff785c8-f305-4111-b54a-6d01717ce756
	I0416 18:00:41.224182    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.224223    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.224223    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.224223    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.224315    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.224478    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-945500","namespace":"kube-system","uid":"01b937c2-9827-4240-83f0-3536fec5eb5e","resourceVersion":"345","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.mirror":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.seen":"2024-04-16T17:57:28.101473146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0416 18:00:41.225131    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.225131    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.225131    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.225131    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.231431    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 18:00:41.231431    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.231431    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.231431    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.231431    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.231431    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.231431    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.231431    6988 round_trippers.go:580]     Audit-Id: d45b4d6a-ea94-4484-87ef-fd18b35ed725
	I0416 18:00:41.231431    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.232071    6988 pod_ready.go:92] pod "kube-controller-manager-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.232071    6988 pod_ready.go:81] duration metric: took 11.3966ms for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.232071    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q5bdr" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.382236    6988 request.go:629] Waited for 150.1565ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q5bdr
	I0416 18:00:41.382407    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q5bdr
	I0416 18:00:41.382407    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.382407    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.382407    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.385083    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.385083    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.385083    6988 round_trippers.go:580]     Audit-Id: b4d8ec79-02a6-45ad-9ecc-b7b22761dffb
	I0416 18:00:41.385083    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.385083    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.385083    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.385083    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.385083    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.385507    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q5bdr","generateName":"kube-proxy-","namespace":"kube-system","uid":"18f90e3f-dd52-44a3-918a-66181a779f58","resourceVersion":"614","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5826 chars]
	I0416 18:00:41.585818    6988 request.go:629] Waited for 199.7761ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:41.585818    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:41.585818    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.586164    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.586164    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.590196    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:41.590196    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.590196    6988 round_trippers.go:580]     Audit-Id: 1d479fce-49d7-483b-a6cd-e9bad5ef24c8
	I0416 18:00:41.590196    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.590196    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.590196    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.590196    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.590196    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.590196    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"630","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3268 chars]
	I0416 18:00:41.590835    6988 pod_ready.go:92] pod "kube-proxy-q5bdr" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.590835    6988 pod_ready.go:81] duration metric: took 358.7431ms for pod "kube-proxy-q5bdr" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.590835    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.787070    6988 request.go:629] Waited for 196.0845ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 18:00:41.787761    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 18:00:41.787761    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.787761    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.787761    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.791225    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:41.791225    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.791225    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.791225    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.791225    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.791225    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.791225    6988 round_trippers.go:580]     Audit-Id: 0948013e-ea2e-4863-bd44-98088c0ba200
	I0416 18:00:41.791225    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.792789    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rfxsg","generateName":"kube-proxy-","namespace":"kube-system","uid":"b740e6e0-4768-4dd4-a958-307662a92578","resourceVersion":"401","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0416 18:00:41.990002    6988 request.go:629] Waited for 196.614ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.990240    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.990240    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.990240    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.990240    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.993828    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:41.993828    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.993828    6988 round_trippers.go:580]     Audit-Id: 604aaeac-f05a-47b3-96f5-af81155d3173
	I0416 18:00:41.993828    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.993828    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.993828    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.993828    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.993828    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:42 GMT
	I0416 18:00:41.994260    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.994754    6988 pod_ready.go:92] pod "kube-proxy-rfxsg" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.994817    6988 pod_ready.go:81] duration metric: took 403.9592ms for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.994817    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:42.191736    6988 request.go:629] Waited for 196.6039ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 18:00:42.191828    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 18:00:42.191933    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:42.191933    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:42.191933    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:42.194567    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:42.194567    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:42.194567    6988 round_trippers.go:580]     Audit-Id: 6ab76f79-405f-48f9-ad04-90e78aa34737
	I0416 18:00:42.194567    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:42.194567    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:42.194567    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:42.194567    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:42.195203    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:42 GMT
	I0416 18:00:42.195382    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-945500","namespace":"kube-system","uid":"a09e52e8-1ac2-4c22-8a3d-57969fae85a9","resourceVersion":"310","creationTimestamp":"2024-04-16T17:57:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.mirror":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.seen":"2024-04-16T17:57:20.694761708Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0416 18:00:42.393042    6988 request.go:629] Waited for 196.8309ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:42.393350    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:42.393350    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:42.393434    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:42.393434    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:42.396719    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:42.397078    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:42.397078    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:42.397078    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:42.397078    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:42 GMT
	I0416 18:00:42.397078    6988 round_trippers.go:580]     Audit-Id: ff7a49f1-7963-4872-babf-4857b06f6961
	I0416 18:00:42.397078    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:42.397078    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:42.397705    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:42.397705    6988 pod_ready.go:92] pod "kube-scheduler-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:42.397705    6988 pod_ready.go:81] duration metric: took 402.8649ms for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:42.397705    6988 pod_ready.go:38] duration metric: took 1.2114007s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:00:42.398226    6988 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 18:00:42.407057    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:00:42.430019    6988 system_svc.go:56] duration metric: took 31.7913ms WaitForService to wait for kubelet
	I0416 18:00:42.430019    6988 kubeadm.go:576] duration metric: took 19.9677952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 18:00:42.430019    6988 node_conditions.go:102] verifying NodePressure condition ...
	I0416 18:00:42.594801    6988 request.go:629] Waited for 164.4742ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes
	I0416 18:00:42.595048    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes
	I0416 18:00:42.595048    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:42.595156    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:42.595156    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:42.600192    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 18:00:42.600192    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:42.600192    6988 round_trippers.go:580]     Audit-Id: 7201947e-da4a-45b2-acc1-266f83b267ad
	I0416 18:00:42.600296    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:42.600296    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:42.600296    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:42.600296    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:42.600296    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:42 GMT
	I0416 18:00:42.600799    6988 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"633"},"items":[{"metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 9279 chars]
	I0416 18:00:42.601645    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:00:42.601726    6988 node_conditions.go:123] node cpu capacity is 2
	I0416 18:00:42.601726    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:00:42.601726    6988 node_conditions.go:123] node cpu capacity is 2
	I0416 18:00:42.601726    6988 node_conditions.go:105] duration metric: took 171.6974ms to run NodePressure ...
	I0416 18:00:42.601799    6988 start.go:240] waiting for startup goroutines ...
	I0416 18:00:42.601887    6988 start.go:254] writing updated cluster config ...
	I0416 18:00:42.611423    6988 ssh_runner.go:195] Run: rm -f paused
	I0416 18:00:42.727143    6988 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 18:00:42.728491    6988 out.go:177] * Done! kubectl is now configured to use "multinode-945500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 16 17:57:50 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:50.867992582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:57:50 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:50.870314891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 17:57:50 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:50.870559113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 17:57:50 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:50.870771832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:57:50 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:50.871771722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:57:50 multinode-945500 cri-dockerd[1229]: time="2024-04-16T17:57:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6f233a9704eee6bb1cf687b3da2862d19573fa48783019b66bbec5c674edc5c5/resolv.conf as [nameserver 172.19.80.1]"
	Apr 16 17:57:51 multinode-945500 cri-dockerd[1229]: time="2024-04-16T17:57:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ba60ece6840a1a429cd8774d8df1b9513d4afe735215afb3f616bcd9615ab76/resolv.conf as [nameserver 172.19.80.1]"
	Apr 16 17:57:51 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:51.142641877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 17:57:51 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:51.144651052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 17:57:51 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:51.144685055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:57:51 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:51.144816666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:57:51 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:51.272898776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 17:57:51 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:51.272990084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 17:57:51 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:51.273003985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:57:51 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:51.274090773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:01:05 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:05.483494643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:01:05 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:05.483635748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:01:05 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:05.483656849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:01:05 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:05.485502118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:01:05 multinode-945500 cri-dockerd[1229]: time="2024-04-16T18:01:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c72a50cfb5bdeb4ceb5279eb60fe15681ce2bc5a0b4d7bd7d08ad490736a87c7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 16 18:01:06 multinode-945500 cri-dockerd[1229]: time="2024-04-16T18:01:06Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 16 18:01:06 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:06.790007462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:01:06 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:06.790158272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:01:06 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:06.790278279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:01:06 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:06.790482592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1475366123af9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   46 seconds ago      Running             busybox                   0                   c72a50cfb5bde       busybox-7fdf7869d9-jxvx2
	6ad0b1d75a1e3       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   2ba60ece6840a       coredns-76f75df574-86z7h
	2b470472d009f       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   6f233a9704eee       storage-provisioner
	cd37920f1d544       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              4 minutes ago       Running             kindnet-cni               0                   d2cd68d7f406d       kindnet-tp7jl
	f56880607ce1e       a1d263b5dc5b0                                                                                         4 minutes ago       Running             kube-proxy                0                   68766d2b671ff       kube-proxy-rfxsg
	736259e5d03b5       39f995c9f1996                                                                                         4 minutes ago       Running             kube-apiserver            0                   b8699d93388d0       kube-apiserver-multinode-945500
	4a7c8d9808b66       8c390d98f50c0                                                                                         4 minutes ago       Running             kube-scheduler            0                   ecb0ceb1a3fed       kube-scheduler-multinode-945500
	91288754cb0bd       6052a25da3f97                                                                                         4 minutes ago       Running             kube-controller-manager   0                   d28c611e06055       kube-controller-manager-multinode-945500
	0cae708a3787a       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   5f7e5b16341d1       etcd-multinode-945500
	
	
	==> coredns [6ad0b1d75a1e] <==
	[INFO] 10.244.0.3:47642 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140809s
	[INFO] 10.244.1.2:38063 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000393824s
	[INFO] 10.244.1.2:53430 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000153309s
	[INFO] 10.244.1.2:47690 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181411s
	[INFO] 10.244.1.2:40309 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145609s
	[INFO] 10.244.1.2:60258 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000052603s
	[INFO] 10.244.1.2:43597 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000068204s
	[INFO] 10.244.1.2:53767 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061503s
	[INFO] 10.244.1.2:54777 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056603s
	[INFO] 10.244.0.3:38964 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184311s
	[INFO] 10.244.0.3:53114 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074805s
	[INFO] 10.244.0.3:36074 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062204s
	[INFO] 10.244.0.3:60668 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090906s
	[INFO] 10.244.1.2:54659 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099206s
	[INFO] 10.244.1.2:41929 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080505s
	[INFO] 10.244.1.2:40931 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059704s
	[INFO] 10.244.1.2:48577 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058804s
	[INFO] 10.244.0.3:33415 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000283317s
	[INFO] 10.244.0.3:52256 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109407s
	[INFO] 10.244.0.3:34542 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000222014s
	[INFO] 10.244.0.3:59509 - 5 "PTR IN 1.80.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000278017s
	[INFO] 10.244.1.2:34647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164509s
	[INFO] 10.244.1.2:44123 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155309s
	[INFO] 10.244.1.2:47985 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000056403s
	[INFO] 10.244.1.2:38781 - 5 "PTR IN 1.80.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000051303s
	
	
	==> describe nodes <==
	Name:               multinode-945500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-945500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=multinode-945500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T17_57_28_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:57:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-945500
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 18:01:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 18:01:33 +0000   Tue, 16 Apr 2024 17:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 18:01:33 +0000   Tue, 16 Apr 2024 17:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 18:01:33 +0000   Tue, 16 Apr 2024 17:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 18:01:33 +0000   Tue, 16 Apr 2024 17:57:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.91.227
	  Hostname:    multinode-945500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 e85d34dd6c5848b4a3ec498b43e70cda
	  System UUID:                f07a2411-3a9a-ca4a-afc3-5ddc78eea33d
	  Boot ID:                    271a6251-2183-4573-9d3f-923b343cbbd7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-jxvx2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 coredns-76f75df574-86z7h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m12s
	  kube-system                 etcd-multinode-945500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m24s
	  kube-system                 kindnet-tp7jl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m12s
	  kube-system                 kube-apiserver-multinode-945500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-controller-manager-multinode-945500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-proxy-rfxsg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-scheduler-multinode-945500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  Starting                 4m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m32s (x8 over 4m32s)  kubelet          Node multinode-945500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m32s (x8 over 4m32s)  kubelet          Node multinode-945500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m32s (x7 over 4m32s)  kubelet          Node multinode-945500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m24s                  kubelet          Node multinode-945500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m24s                  kubelet          Node multinode-945500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m24s                  kubelet          Node multinode-945500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m13s                  node-controller  Node multinode-945500 event: Registered Node multinode-945500 in Controller
	  Normal  NodeReady                4m2s                   kubelet          Node multinode-945500 status is now: NodeReady
	
	
	Name:               multinode-945500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-945500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=multinode-945500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T18_00_22_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 18:00:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-945500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 18:01:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 18:01:22 +0000   Tue, 16 Apr 2024 18:00:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 18:01:22 +0000   Tue, 16 Apr 2024 18:00:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 18:01:22 +0000   Tue, 16 Apr 2024 18:00:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 18:01:22 +0000   Tue, 16 Apr 2024 18:00:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.91.6
	  Hostname:    multinode-945500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ffb3ffe1886460d8f31c8166436085f
	  System UUID:                cd85b681-7c9d-6842-b820-50fe53a2fe10
	  Boot ID:                    391147f8-cd3e-46f1-9b23-dd3a04f0f3a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ns8nx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kindnet-7pg6g               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      91s
	  kube-system                 kube-proxy-q5bdr            0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 80s                kube-proxy       
	  Normal  NodeHasSufficientMemory  91s (x2 over 91s)  kubelet          Node multinode-945500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s (x2 over 91s)  kubelet          Node multinode-945500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s (x2 over 91s)  kubelet          Node multinode-945500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           88s                node-controller  Node multinode-945500-m02 event: Registered Node multinode-945500-m02 in Controller
	  Normal  NodeReady                71s                kubelet          Node multinode-945500-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr16 17:56] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.180108] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[ +28.712788] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.080808] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.453937] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.161653] systemd-fstab-generator[996]: Ignoring "noauto" option for root device
	[  +0.200737] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[  +2.669121] systemd-fstab-generator[1182]: Ignoring "noauto" option for root device
	[  +0.171244] systemd-fstab-generator[1194]: Ignoring "noauto" option for root device
	[  +0.164230] systemd-fstab-generator[1206]: Ignoring "noauto" option for root device
	[  +0.237653] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[Apr16 17:57] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.100359] kauditd_printk_skb: 205 callbacks suppressed
	[  +2.927133] systemd-fstab-generator[1513]: Ignoring "noauto" option for root device
	[  +5.699753] systemd-fstab-generator[1707]: Ignoring "noauto" option for root device
	[  +0.085837] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.760431] systemd-fstab-generator[2107]: Ignoring "noauto" option for root device
	[  +0.135160] kauditd_printk_skb: 62 callbacks suppressed
	[  +7.450297] hrtimer: interrupt took 987259 ns
	[  +5.262610] systemd-fstab-generator[2292]: Ignoring "noauto" option for root device
	[  +0.195654] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.560394] kauditd_printk_skb: 51 callbacks suppressed
	[Apr16 18:01] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [0cae708a3787] <==
	{"level":"info","ts":"2024-04-16T17:57:22.024751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 switched to configuration voters=(16790251013889734582)"}
	{"level":"info","ts":"2024-04-16T17:57:22.037022Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ba3fb579e58fbd76","local-member-id":"e902f456ac8a37b6","added-peer-id":"e902f456ac8a37b6","added-peer-peer-urls":["https://172.19.91.227:2380"]}
	{"level":"info","ts":"2024-04-16T17:57:22.036585Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T17:57:22.037467Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e902f456ac8a37b6","initial-advertise-peer-urls":["https://172.19.91.227:2380"],"listen-peer-urls":["https://172.19.91.227:2380"],"advertise-client-urls":["https://172.19.91.227:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.91.227:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T17:57:22.037573Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T17:57:22.036608Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.91.227:2380"}
	{"level":"info","ts":"2024-04-16T17:57:22.037796Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.91.227:2380"}
	{"level":"info","ts":"2024-04-16T17:57:22.485441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-16T17:57:22.485773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-16T17:57:22.486062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 received MsgPreVoteResp from e902f456ac8a37b6 at term 1"}
	{"level":"info","ts":"2024-04-16T17:57:22.486206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 became candidate at term 2"}
	{"level":"info","ts":"2024-04-16T17:57:22.486317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 received MsgVoteResp from e902f456ac8a37b6 at term 2"}
	{"level":"info","ts":"2024-04-16T17:57:22.486459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 became leader at term 2"}
	{"level":"info","ts":"2024-04-16T17:57:22.486613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e902f456ac8a37b6 elected leader e902f456ac8a37b6 at term 2"}
	{"level":"info","ts":"2024-04-16T17:57:22.492605Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e902f456ac8a37b6","local-member-attributes":"{Name:multinode-945500 ClientURLs:[https://172.19.91.227:2379]}","request-path":"/0/members/e902f456ac8a37b6/attributes","cluster-id":"ba3fb579e58fbd76","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T17:57:22.493027Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:57:22.493291Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T17:57:22.495438Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T17:57:22.493174Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:57:22.501637Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T17:57:22.494099Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:57:22.508993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.91.227:2379"}
	{"level":"info","ts":"2024-04-16T17:57:22.537458Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ba3fb579e58fbd76","local-member-id":"e902f456ac8a37b6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:57:22.537767Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:57:22.540447Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 18:01:52 up 6 min,  0 users,  load average: 0.40, 0.34, 0.17
	Linux multinode-945500 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [cd37920f1d54] <==
	I0416 18:00:48.341227       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:00:58.347037       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:00:58.347132       1 main.go:227] handling current node
	I0416 18:00:58.347144       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:00:58.347151       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:01:08.355666       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:01:08.355811       1 main.go:227] handling current node
	I0416 18:01:08.355825       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:01:08.355832       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:01:18.369784       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:01:18.369883       1 main.go:227] handling current node
	I0416 18:01:18.369898       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:01:18.369906       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:01:28.383289       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:01:28.383374       1 main.go:227] handling current node
	I0416 18:01:28.383409       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:01:28.383417       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:01:38.399056       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:01:38.399162       1 main.go:227] handling current node
	I0416 18:01:38.399175       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:01:38.399183       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:01:48.405521       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:01:48.405556       1 main.go:227] handling current node
	I0416 18:01:48.405567       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:01:48.405573       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [736259e5d03b] <==
	I0416 17:57:24.492548       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 17:57:24.493015       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 17:57:24.493164       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 17:57:24.493567       1 aggregator.go:165] initial CRD sync complete...
	I0416 17:57:24.493754       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 17:57:24.493855       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 17:57:24.493948       1 cache.go:39] Caches are synced for autoregister controller
	I0416 17:57:24.498835       1 controller.go:624] quota admission added evaluator for: namespaces
	I0416 17:57:24.572544       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0416 17:57:24.581941       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 17:57:25.383934       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0416 17:57:25.391363       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0416 17:57:25.391584       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 17:57:26.186472       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 17:57:26.241100       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 17:57:26.380286       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0416 17:57:26.389156       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.91.227]
	I0416 17:57:26.390446       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 17:57:26.395894       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 17:57:26.463024       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 17:57:27.978875       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 17:57:27.996061       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0416 17:57:28.010130       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 17:57:40.322187       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0416 17:57:40.406944       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [91288754cb0b] <==
	I0416 17:57:41.176487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="38.505µs"
	I0416 17:57:50.419156       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="87.708µs"
	I0416 17:57:50.439046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="77.007µs"
	I0416 17:57:52.289724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="340.797µs"
	I0416 17:57:52.327958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="8.879815ms"
	I0416 17:57:52.329283       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="77.899µs"
	I0416 17:57:54.522679       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0416 18:00:21.143291       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-945500-m02\" does not exist"
	I0416 18:00:21.160886       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7pg6g"
	I0416 18:00:21.165863       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q5bdr"
	I0416 18:00:21.190337       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-945500-m02" podCIDRs=["10.244.1.0/24"]
	I0416 18:00:24.552622       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-945500-m02"
	I0416 18:00:24.552697       1 event.go:376] "Event occurred" object="multinode-945500-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-945500-m02 event: Registered Node multinode-945500-m02 in Controller"
	I0416 18:00:41.273225       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-945500-m02"
	I0416 18:01:05.000162       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 2"
	I0416 18:01:05.018037       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-ns8nx"
	I0416 18:01:05.041877       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-jxvx2"
	I0416 18:01:05.061957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="58.524499ms"
	I0416 18:01:05.079880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="17.398354ms"
	I0416 18:01:05.080339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="67.502µs"
	I0416 18:01:05.093042       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="55.802µs"
	I0416 18:01:07.013162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="7.557663ms"
	I0416 18:01:07.014558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="1.14747ms"
	I0416 18:01:07.433900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="7.930386ms"
	I0416 18:01:07.434257       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="37.403µs"
	
	
	==> kube-proxy [f56880607ce1] <==
	I0416 17:57:41.776688       1 server_others.go:72] "Using iptables proxy"
	I0416 17:57:41.792626       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.91.227"]
	I0416 17:57:41.867257       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 17:57:41.867331       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:57:41.867350       1 server_others.go:168] "Using iptables Proxier"
	I0416 17:57:41.871330       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:57:41.872230       1 server.go:865] "Version info" version="v1.29.3"
	I0416 17:57:41.872370       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:57:41.874113       1 config.go:188] "Starting service config controller"
	I0416 17:57:41.874135       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 17:57:41.874160       1 config.go:97] "Starting endpoint slice config controller"
	I0416 17:57:41.874165       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 17:57:41.876871       1 config.go:315] "Starting node config controller"
	I0416 17:57:41.876896       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 17:57:41.974693       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 17:57:41.974749       1 shared_informer.go:318] Caches are synced for service config
	I0416 17:57:41.977426       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4a7c8d9808b6] <==
	W0416 17:57:25.449324       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.449598       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.655533       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.656479       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.692827       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 17:57:25.693097       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 17:57:25.711042       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 17:57:25.711136       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 17:57:25.720155       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 17:57:25.720353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 17:57:25.721550       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.721738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.738855       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 17:57:25.738995       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 17:57:25.765058       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 17:57:25.765096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0416 17:57:25.774340       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.774569       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.791990       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 17:57:25.792031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 17:57:25.929298       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 17:57:25.929342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 17:57:26.119349       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 17:57:26.119818       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0416 17:57:29.235915       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 17:57:50 multinode-945500 kubelet[2114]: I0416 17:57:50.569693    2114 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3bd5cc95-eef6-473e-b6f9-898568046f1b-tmp\") pod \"storage-provisioner\" (UID: \"3bd5cc95-eef6-473e-b6f9-898568046f1b\") " pod="kube-system/storage-provisioner"
	Apr 16 17:57:52 multinode-945500 kubelet[2114]: I0416 17:57:52.305729    2114 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-86z7h" podStartSLOduration=12.305690262 podStartE2EDuration="12.305690262s" podCreationTimestamp="2024-04-16 17:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-16 17:57:52.28915642 +0000 UTC m=+24.349141489" watchObservedRunningTime="2024-04-16 17:57:52.305690262 +0000 UTC m=+24.365675231"
	Apr 16 17:57:52 multinode-945500 kubelet[2114]: I0416 17:57:52.320475    2114 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=5.320436921 podStartE2EDuration="5.320436921s" podCreationTimestamp="2024-04-16 17:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-16 17:57:52.306050658 +0000 UTC m=+24.366035627" watchObservedRunningTime="2024-04-16 17:57:52.320436921 +0000 UTC m=+24.380421990"
	Apr 16 17:58:28 multinode-945500 kubelet[2114]: E0416 17:58:28.260190    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:58:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:58:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:58:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:58:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:59:28 multinode-945500 kubelet[2114]: E0416 17:59:28.261521    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:59:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:59:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:59:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:59:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:00:28 multinode-945500 kubelet[2114]: E0416 18:00:28.262126    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:00:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:00:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:00:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:00:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:01:05 multinode-945500 kubelet[2114]: I0416 18:01:05.058816    2114 topology_manager.go:215] "Topology Admit Handler" podUID="61d6d0ec-5716-446c-acd3-845d2a3cd08e" podNamespace="default" podName="busybox-7fdf7869d9-jxvx2"
	Apr 16 18:01:05 multinode-945500 kubelet[2114]: I0416 18:01:05.157885    2114 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv45p\" (UniqueName: \"kubernetes.io/projected/61d6d0ec-5716-446c-acd3-845d2a3cd08e-kube-api-access-sv45p\") pod \"busybox-7fdf7869d9-jxvx2\" (UID: \"61d6d0ec-5716-446c-acd3-845d2a3cd08e\") " pod="default/busybox-7fdf7869d9-jxvx2"
	Apr 16 18:01:28 multinode-945500 kubelet[2114]: E0416 18:01:28.260561    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:01:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:01:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:01:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:01:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0416 18:01:45.307528    7244 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-945500 -n multinode-945500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-945500 -n multinode-945500: (10.8126821s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-945500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (52.50s)
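The kubelet section above shows the same "Could not set up iptables canary" error recurring once per minute. A minimal sketch for tallying such recurrences when triaging a saved post-mortem log (the two embedded sample lines stand in for a real capture; in practice redirect `minikube logs -p multinode-945500` to a file and grep that instead):

```shell
# Count how often the kubelet ip6tables canary failure recurs in a log.
# The two lines below are abbreviated samples of the error seen above.
log='Apr 16 17:58:28 multinode-945500 kubelet[2114]: E0416 17:58:28.260190 iptables.go:575] "Could not set up iptables canary"
Apr 16 17:59:28 multinode-945500 kubelet[2114]: E0416 17:59:28.261521 iptables.go:575] "Could not set up iptables canary"'

# grep -c prints the number of matching lines.
printf '%s\n' "$log" | grep -c 'Could not set up iptables canary'
```

Running this against the full kubelet dump above would report one hit per minute-boundary, which points at a periodic canary check rather than a one-off failure.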
TestMultiNode/serial/AddNode (231.4s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-945500 -v 3 --alsologtostderr
E0416 18:02:30.259536    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-945500 -v 3 --alsologtostderr: exit status 90 (3m20.7363305s)
-- stdout --
	* Adding node m03 to cluster multinode-945500 as [worker]
	* Starting "multinode-945500-m03" worker node in "multinode-945500" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	W0416 18:02:04.868215    6472 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 18:02:04.930010    6472 out.go:291] Setting OutFile to fd 884 ...
	I0416 18:02:04.930753    6472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:02:04.930753    6472 out.go:304] Setting ErrFile to fd 1012...
	I0416 18:02:04.930753    6472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:02:04.948248    6472 mustload.go:65] Loading cluster: multinode-945500
	I0416 18:02:04.949063    6472 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:02:04.949728    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:02:06.915402    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:02:06.915402    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:06.915402    6472 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:02:06.916368    6472 api_server.go:166] Checking apiserver status ...
	I0416 18:02:06.925890    6472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 18:02:06.925890    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:02:08.928118    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:02:08.928935    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:08.928935    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:02:11.260608    6472 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 18:02:11.260608    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:11.261253    6472 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:02:11.380969    6472 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.4547358s)
	I0416 18:02:11.390038    6472 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2058/cgroup
	W0416 18:02:11.409048    6472 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2058/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 18:02:11.420397    6472 ssh_runner.go:195] Run: ls
	I0416 18:02:11.428093    6472 api_server.go:253] Checking apiserver healthz at https://172.19.91.227:8443/healthz ...
	I0416 18:02:11.438063    6472 api_server.go:279] https://172.19.91.227:8443/healthz returned 200:
	ok
	I0416 18:02:11.439558    6472 out.go:177] * Adding node m03 to cluster multinode-945500 as [worker]
	I0416 18:02:11.441641    6472 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:02:11.442234    6472 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 18:02:11.448174    6472 out.go:177] * Starting "multinode-945500-m03" worker node in "multinode-945500" cluster
	I0416 18:02:11.448765    6472 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 18:02:11.448765    6472 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 18:02:11.448765    6472 cache.go:56] Caching tarball of preloaded images
	I0416 18:02:11.449446    6472 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 18:02:11.449446    6472 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 18:02:11.449446    6472 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 18:02:11.459650    6472 start.go:360] acquireMachinesLock for multinode-945500-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 18:02:11.460272    6472 start.go:364] duration metric: took 573.2µs to acquireMachinesLock for "multinode-945500-m03"
	I0416 18:02:11.460272    6472 start.go:93] Provisioning new machine with config: &{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0416 18:02:11.460272    6472 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0416 18:02:11.461453    6472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 18:02:11.461453    6472 start.go:159] libmachine.API.Create for "multinode-945500" (driver="hyperv")
	I0416 18:02:11.461712    6472 client.go:168] LocalClient.Create starting
	I0416 18:02:11.461712    6472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 18:02:11.461712    6472 main.go:141] libmachine: Decoding PEM data...
	I0416 18:02:11.462249    6472 main.go:141] libmachine: Parsing certificate...
	I0416 18:02:11.462484    6472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 18:02:11.462484    6472 main.go:141] libmachine: Decoding PEM data...
	I0416 18:02:11.462484    6472 main.go:141] libmachine: Parsing certificate...
	I0416 18:02:11.462484    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 18:02:13.261994    6472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 18:02:13.261994    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:13.263075    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 18:02:14.821601    6472 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 18:02:14.821601    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:14.821601    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 18:02:16.155560    6472 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 18:02:16.156602    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:16.156678    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 18:02:19.492156    6472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 18:02:19.492625    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:19.494971    6472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 18:02:19.798181    6472 main.go:141] libmachine: Creating SSH key...
	I0416 18:02:19.950379    6472 main.go:141] libmachine: Creating VM...
	I0416 18:02:19.950379    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 18:02:22.566086    6472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 18:02:22.566178    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:22.566178    6472 main.go:141] libmachine: Using switch "Default Switch"
	I0416 18:02:22.566259    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 18:02:24.117948    6472 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 18:02:24.117948    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:24.118245    6472 main.go:141] libmachine: Creating VHD
	I0416 18:02:24.118245    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 18:02:27.620605    6472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 65D9C96C-0DBF-4D83-BB5D-D53AA2B5FB67
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 18:02:27.621514    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:27.621577    6472 main.go:141] libmachine: Writing magic tar header
	I0416 18:02:27.621577    6472 main.go:141] libmachine: Writing SSH key tar header
	I0416 18:02:27.629985    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 18:02:30.578312    6472 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:02:30.578312    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:30.578312    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\disk.vhd' -SizeBytes 20000MB
	I0416 18:02:32.897693    6472 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:02:32.897693    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:32.898043    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-945500-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 18:02:36.012698    6472 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-945500-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 18:02:36.012698    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:36.012698    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-945500-m03 -DynamicMemoryEnabled $false
	I0416 18:02:37.987473    6472 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:02:37.987473    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:37.987473    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-945500-m03 -Count 2
	I0416 18:02:39.958869    6472 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:02:39.958869    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:39.959962    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-945500-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\boot2docker.iso'
	I0416 18:02:42.269253    6472 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:02:42.269253    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:42.269333    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-945500-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\disk.vhd'
	I0416 18:02:44.586645    6472 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:02:44.586645    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:44.586645    6472 main.go:141] libmachine: Starting VM...
	I0416 18:02:44.587063    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-945500-m03
	I0416 18:02:47.189435    6472 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:02:47.189561    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:47.189561    6472 main.go:141] libmachine: Waiting for host to start...
	I0416 18:02:47.189561    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:02:49.271440    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:02:49.271440    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:49.271676    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:02:51.569079    6472 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:02:51.569079    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:52.571403    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:02:54.589940    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:02:54.590860    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:54.590860    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:02:56.830146    6472 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:02:56.830146    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:57.838069    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:02:59.821717    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:02:59.821717    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:02:59.821717    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:03:02.149287    6472 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:03:02.149604    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:03.151096    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:03:05.178644    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:03:05.178644    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:05.178644    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:03:07.479297    6472 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:03:07.479297    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:08.489522    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:03:10.505666    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:03:10.505666    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:10.505666    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:03:12.931384    6472 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:03:12.931460    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:12.931531    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:03:14.836246    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:03:14.836246    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:14.836246    6472 machine.go:94] provisionDockerMachine start ...
	I0416 18:03:14.836246    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:03:16.772450    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:03:16.772502    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:16.772604    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:03:19.043549    6472 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:03:19.043860    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:19.048384    6472 main.go:141] libmachine: Using SSH client type: native
	I0416 18:03:19.058460    6472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.156 22 <nil> <nil>}
	I0416 18:03:19.058460    6472 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 18:03:19.186836    6472 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 18:03:19.187363    6472 buildroot.go:166] provisioning hostname "multinode-945500-m03"
	I0416 18:03:19.187418    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:03:21.101580    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:03:21.101580    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:21.102580    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:03:23.372207    6472 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:03:23.372207    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:23.378546    6472 main.go:141] libmachine: Using SSH client type: native
	I0416 18:03:23.378546    6472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.156 22 <nil> <nil>}
	I0416 18:03:23.378929    6472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-945500-m03 && echo "multinode-945500-m03" | sudo tee /etc/hostname
	I0416 18:03:23.542781    6472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-945500-m03
	
	I0416 18:03:23.542896    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:03:25.466106    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:03:25.466106    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:25.466892    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:03:27.800218    6472 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:03:27.800218    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:27.807181    6472 main.go:141] libmachine: Using SSH client type: native
	I0416 18:03:27.807788    6472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.156 22 <nil> <nil>}
	I0416 18:03:27.807788    6472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-945500-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-945500-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-945500-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 18:03:27.962192    6472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 18:03:27.962362    6472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 18:03:27.962362    6472 buildroot.go:174] setting up certificates
	I0416 18:03:27.962453    6472 provision.go:84] configureAuth start
	I0416 18:03:27.962553    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:03:29.903080    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:03:29.903482    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:29.903578    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:03:32.151972    6472 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:03:32.151972    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:32.152100    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:03:34.085784    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:03:34.085784    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:34.085784    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:03:36.398059    6472 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:03:36.398174    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:36.398174    6472 provision.go:143] copyHostCerts
	I0416 18:03:36.398174    6472 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 18:03:36.398174    6472 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 18:03:36.398760    6472 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 18:03:36.399443    6472 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 18:03:36.399443    6472 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 18:03:36.400011    6472 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 18:03:36.400387    6472 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 18:03:36.400387    6472 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 18:03:36.400950    6472 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 18:03:36.401605    6472 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-945500-m03 san=[127.0.0.1 172.19.83.156 localhost minikube multinode-945500-m03]
	I0416 18:03:36.473631    6472 provision.go:177] copyRemoteCerts
	I0416 18:03:36.482623    6472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 18:03:36.482623    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:03:38.440396    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:03:38.440396    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:38.440396    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:03:40.757508    6472 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:03:40.757508    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:40.758628    6472 sshutil.go:53] new ssh client: &{IP:172.19.83.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\id_rsa Username:docker}
	I0416 18:03:40.864941    6472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.38207s)
	I0416 18:03:40.865805    6472 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 18:03:40.913353    6472 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0416 18:03:40.956603    6472 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 18:03:41.001404    6472 provision.go:87] duration metric: took 13.0382102s to configureAuth
	I0416 18:03:41.001404    6472 buildroot.go:189] setting minikube options for container-runtime
	I0416 18:03:41.002244    6472 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:03:41.002244    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:03:42.987948    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:03:42.987948    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:42.987948    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:03:45.272439    6472 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:03:45.272439    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:45.276901    6472 main.go:141] libmachine: Using SSH client type: native
	I0416 18:03:45.277498    6472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.156 22 <nil> <nil>}
	I0416 18:03:45.277498    6472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 18:03:45.406103    6472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 18:03:45.406103    6472 buildroot.go:70] root file system type: tmpfs
	I0416 18:03:45.406375    6472 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 18:03:45.406375    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:03:47.382234    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:03:47.382702    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:47.382808    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:03:49.679110    6472 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:03:49.679348    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:49.684343    6472 main.go:141] libmachine: Using SSH client type: native
	I0416 18:03:49.685095    6472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.156 22 <nil> <nil>}
	I0416 18:03:49.685095    6472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 18:03:49.842567    6472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
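The comments inside the unit file above explain why the drop-in starts with a blank `ExecStart=`: systemd treats multiple `ExecStart=` settings as an error for anything other than `Type=oneshot` services. A minimal sketch of that reset pattern, written to a hypothetical path under `/tmp` rather than `/etc/systemd/system`:

```shell
# Hypothetical systemd drop-in illustrating the ExecStart reset the comments
# above describe; written to /tmp here rather than a real systemd unit dir.
mkdir -p /tmp/dropin-demo
cat > /tmp/dropin-demo/override.conf <<'EOF'
[Service]
# The blank ExecStart clears the command inherited from the base unit; without
# it, systemd sees two ExecStart= settings and refuses to start the service.
ExecStart=
ExecStart=/usr/bin/dockerd --host=fd://
EOF
grep -c '^ExecStart=' /tmp/dropin-demo/override.conf   # prints 2: the reset plus the replacement
```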
	I0416 18:03:49.843317    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:03:51.756600    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:03:51.757203    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:51.757280    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:03:54.090566    6472 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:03:54.090566    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:54.094989    6472 main.go:141] libmachine: Using SSH client type: native
	I0416 18:03:54.095398    6472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.156 22 <nil> <nil>}
	I0416 18:03:54.095478    6472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 18:03:56.054984    6472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
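The SSH command above is an update-if-changed idiom: `diff` exits 0 when the old and new unit files are identical (nothing to do) and nonzero when they differ or, as here, the old file does not exist, which triggers the install-and-restart branch. The same pattern on plain files, with the systemctl steps omitted:

```shell
# Update-if-changed on plain files in a temp dir: install the new config only
# when diff reports a difference (or the old file is missing).
dir=$(mktemp -d)
printf 'old\n' > "$dir/docker.service"
printf 'new\n' > "$dir/docker.service.new"
diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null \
  || mv "$dir/docker.service.new" "$dir/docker.service"
cat "$dir/docker.service"   # prints "new"
```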
	I0416 18:03:56.055057    6472 machine.go:97] duration metric: took 41.2164713s to provisionDockerMachine
	I0416 18:03:56.055057    6472 client.go:171] duration metric: took 1m44.5874074s to LocalClient.Create
	I0416 18:03:56.055123    6472 start.go:167] duration metric: took 1m44.5877329s to libmachine.API.Create "multinode-945500"
	I0416 18:03:56.055190    6472 start.go:293] postStartSetup for "multinode-945500-m03" (driver="hyperv")
	I0416 18:03:56.055190    6472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 18:03:56.063600    6472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 18:03:56.063600    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:03:57.975553    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:03:57.975553    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:03:57.976236    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:04:00.318446    6472 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:04:00.318446    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:04:00.319429    6472 sshutil.go:53] new ssh client: &{IP:172.19.83.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\id_rsa Username:docker}
	I0416 18:04:00.432282    6472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.368434s)
	I0416 18:04:00.444596    6472 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 18:04:00.450739    6472 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 18:04:00.450839    6472 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 18:04:00.451195    6472 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 18:04:00.451814    6472 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 18:04:00.460146    6472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 18:04:00.477888    6472 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 18:04:00.521356    6472 start.go:296] duration metric: took 4.4659127s for postStartSetup
	I0416 18:04:00.525101    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:04:02.491202    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:04:02.491571    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:04:02.491643    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:04:04.774900    6472 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:04:04.774900    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:04:04.774900    6472 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 18:04:04.777037    6472 start.go:128] duration metric: took 1m53.310331s to createHost
	I0416 18:04:04.777037    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:04:06.767270    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:04:06.767424    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:04:06.767496    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:04:09.121003    6472 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:04:09.121771    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:04:09.125727    6472 main.go:141] libmachine: Using SSH client type: native
	I0416 18:04:09.126098    6472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.156 22 <nil> <nil>}
	I0416 18:04:09.126181    6472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 18:04:09.261505    6472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713290649.427104348
	
	I0416 18:04:09.261505    6472 fix.go:216] guest clock: 1713290649.427104348
	I0416 18:04:09.261505    6472 fix.go:229] Guest: 2024-04-16 18:04:09.427104348 +0000 UTC Remote: 2024-04-16 18:04:04.7770371 +0000 UTC m=+119.988636101 (delta=4.650067248s)
	I0416 18:04:09.261505    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:04:11.260967    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:04:11.260967    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:04:11.260967    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:04:13.679330    6472 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:04:13.679330    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:04:13.683405    6472 main.go:141] libmachine: Using SSH client type: native
	I0416 18:04:13.683778    6472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.156 22 <nil> <nil>}
	I0416 18:04:13.683889    6472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713290649
	I0416 18:04:13.830313    6472 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 18:04:09 UTC 2024
	
	I0416 18:04:13.830313    6472 fix.go:236] clock set: Tue Apr 16 18:04:09 UTC 2024
	 (err=<nil>)
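The clock fix above compares the guest's `date +%s.%N` against the host-side timestamp and then sets the guest clock to the whole-second value. The reported delta can be reproduced from the two timestamps in the log (truncated to millisecond precision here, since awk's doubles lose sub-microsecond accuracy at this magnitude):

```shell
# Recompute the guest-vs-host clock delta reported by fix.go, using the two
# timestamps from the log lines above.
guest=1713290649.427104348   # guest clock from "date +%s.%N"
host=1713290644.7770371      # 2024-04-16 18:04:04.7770371 UTC as a Unix timestamp
delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { printf "%.3f", g - h }')
echo "$delta"   # prints 4.650, matching the logged delta=4.650067248s
```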
	I0416 18:04:13.830313    6472 start.go:83] releasing machines lock for "multinode-945500-m03", held for 2m2.3630928s
	I0416 18:04:13.831148    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:04:15.786044    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:04:15.786044    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:04:15.786855    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:04:18.121290    6472 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:04:18.121290    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:04:18.124669    6472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 18:04:18.124747    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:04:18.131598    6472 ssh_runner.go:195] Run: systemctl --version
	I0416 18:04:18.131598    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:04:20.145772    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:04:20.145892    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:04:20.146026    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:04:20.147695    6472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:04:20.147695    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:04:20.147695    6472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:04:22.528080    6472 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:04:22.528177    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:04:22.528177    6472 sshutil.go:53] new ssh client: &{IP:172.19.83.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\id_rsa Username:docker}
	I0416 18:04:22.549870    6472 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:04:22.549870    6472 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:04:22.551211    6472 sshutil.go:53] new ssh client: &{IP:172.19.83.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\id_rsa Username:docker}
	I0416 18:04:22.726857    6472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6019266s)
	I0416 18:04:22.726999    6472 ssh_runner.go:235] Completed: systemctl --version: (4.5951396s)
	I0416 18:04:22.737998    6472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 18:04:22.746497    6472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 18:04:22.754362    6472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 18:04:22.782711    6472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 18:04:22.782782    6472 start.go:494] detecting cgroup driver to use...
	I0416 18:04:22.782782    6472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:04:22.826742    6472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 18:04:22.853448    6472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 18:04:22.871195    6472 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 18:04:22.879695    6472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 18:04:22.910890    6472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:04:22.938389    6472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 18:04:22.971586    6472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:04:23.005955    6472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 18:04:23.041824    6472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 18:04:23.067685    6472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 18:04:23.094319    6472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
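The run of `sed` commands above rewrites containerd's `config.toml` in place to use the cgroupfs driver. One of those transforms applied to a small sample file (sample TOML content assumed; the `-i -r` flags are GNU sed, as on the Buildroot guest):

```shell
# Apply the SystemdCgroup transform from the log to a sample config.toml:
# the capture group preserves the line's original indentation.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"   # prints "  SystemdCgroup = false"
```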
	I0416 18:04:23.120593    6472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 18:04:23.144560    6472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 18:04:23.167784    6472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:04:23.334673    6472 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 18:04:23.379441    6472 start.go:494] detecting cgroup driver to use...
	I0416 18:04:23.388787    6472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 18:04:23.419198    6472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:04:23.445640    6472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 18:04:23.489500    6472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:04:23.527700    6472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:04:23.559693    6472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 18:04:23.612194    6472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:04:23.632816    6472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:04:23.681658    6472 ssh_runner.go:195] Run: which cri-dockerd
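The `mkdir -p /etc && printf ... | tee /etc/crictl.yaml` command above repoints crictl from the containerd socket to the cri-dockerd socket. The same one-line YAML write, directed at a temp dir instead of `/etc`:

```shell
# Write the crictl runtime-endpoint config as the logged command does,
# but into a temp dir rather than /etc.
etc=$(mktemp -d)
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' \
  | tee "$etc/crictl.yaml" >/dev/null
cat "$etc/crictl.yaml"
```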
	I0416 18:04:23.695642    6472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 18:04:23.710493    6472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 18:04:23.752062    6472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 18:04:23.926474    6472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 18:04:24.092315    6472 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 18:04:24.092315    6472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 18:04:24.130775    6472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:04:24.309049    6472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 18:05:25.430468    6472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1179541s)
	I0416 18:05:25.440125    6472 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 18:05:25.469283    6472 out.go:177] 
	W0416 18:05:25.470006    6472 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 18:03:54 multinode-945500-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:03:54 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:54.841991155Z" level=info msg="Starting up"
	Apr 16 18:03:54 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:54.843003837Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 18:03:54 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:54.844087538Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.872649576Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894150452Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894179760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894232975Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894245778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894306895Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894387918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894569068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894656292Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894671997Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894682400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894756620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.895093614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.897927702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.898021428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.898138560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.898220283Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.898313209Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.898430741Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.898506562Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.908468131Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.908530748Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.908550254Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.908566858Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.908581863Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.908826631Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909313266Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909599345Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909694272Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909713677Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909735183Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909750287Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909763391Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909825608Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909897228Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909913433Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909927137Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909939640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909960346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910122791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910140396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910154900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910168003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910182007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910195111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910208915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910222619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910238523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910250626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910263430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910276934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910307942Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910330649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910430576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910442980Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910492594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910508398Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910523402Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910538807Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910638534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910652338Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910666342Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.911123169Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.911319924Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.911393244Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.911423552Z" level=info msg="containerd successfully booted in 0.041727s"
	Apr 16 18:03:55 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:55.896343832Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 18:03:55 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:55.922019708Z" level=info msg="Loading containers: start."
	Apr 16 18:03:56 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:56.139515399Z" level=info msg="Loading containers: done."
	Apr 16 18:03:56 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:56.155587843Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 18:03:56 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:56.155848309Z" level=info msg="Daemon has completed initialization"
	Apr 16 18:03:56 multinode-945500-m03 systemd[1]: Started Docker Application Container Engine.
	Apr 16 18:03:56 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:56.225017912Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 18:03:56 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:56.225233366Z" level=info msg="API listen on [::]:2376"
	Apr 16 18:04:24 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:04:24.498055828Z" level=info msg="Processing signal 'terminated'"
	Apr 16 18:04:24 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:04:24.499668415Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 18:04:24 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:04:24.500172843Z" level=info msg="Daemon shutdown complete"
	Apr 16 18:04:24 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:04:24.500223746Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 18:04:24 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:04:24.500236646Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 18:04:24 multinode-945500-m03 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 18:04:25 multinode-945500-m03 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 18:04:25 multinode-945500-m03 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 18:04:25 multinode-945500-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:04:25 multinode-945500-m03 dockerd[1017]: time="2024-04-16T18:04:25.572271610Z" level=info msg="Starting up"
	Apr 16 18:05:25 multinode-945500-m03 dockerd[1017]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 18:05:25 multinode-945500-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 18:05:25 multinode-945500-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 18:05:25 multinode-945500-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 18:03:54 multinode-945500-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:03:54 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:54.841991155Z" level=info msg="Starting up"
	Apr 16 18:03:54 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:54.843003837Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 18:03:54 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:54.844087538Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.872649576Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894150452Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894179760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894232975Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894245778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894306895Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894387918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894569068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894656292Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894671997Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894682400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.894756620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.895093614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.897927702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.898021428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.898138560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.898220283Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.898313209Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.898430741Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.898506562Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.908468131Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.908530748Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.908550254Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.908566858Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.908581863Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.908826631Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909313266Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909599345Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909694272Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909713677Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909735183Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909750287Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909763391Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909825608Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909897228Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909913433Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909927137Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909939640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.909960346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910122791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910140396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910154900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910168003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910182007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910195111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910208915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910222619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910238523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910250626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910263430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910276934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910307942Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910330649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910430576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910442980Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910492594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910508398Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910523402Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910538807Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910638534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910652338Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.910666342Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.911123169Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.911319924Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.911393244Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 18:03:54 multinode-945500-m03 dockerd[670]: time="2024-04-16T18:03:54.911423552Z" level=info msg="containerd successfully booted in 0.041727s"
	Apr 16 18:03:55 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:55.896343832Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 18:03:55 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:55.922019708Z" level=info msg="Loading containers: start."
	Apr 16 18:03:56 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:56.139515399Z" level=info msg="Loading containers: done."
	Apr 16 18:03:56 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:56.155587843Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 18:03:56 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:56.155848309Z" level=info msg="Daemon has completed initialization"
	Apr 16 18:03:56 multinode-945500-m03 systemd[1]: Started Docker Application Container Engine.
	Apr 16 18:03:56 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:56.225017912Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 18:03:56 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:03:56.225233366Z" level=info msg="API listen on [::]:2376"
	Apr 16 18:04:24 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:04:24.498055828Z" level=info msg="Processing signal 'terminated'"
	Apr 16 18:04:24 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:04:24.499668415Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 18:04:24 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:04:24.500172843Z" level=info msg="Daemon shutdown complete"
	Apr 16 18:04:24 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:04:24.500223746Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 18:04:24 multinode-945500-m03 dockerd[663]: time="2024-04-16T18:04:24.500236646Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 18:04:24 multinode-945500-m03 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 18:04:25 multinode-945500-m03 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 18:04:25 multinode-945500-m03 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 18:04:25 multinode-945500-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:04:25 multinode-945500-m03 dockerd[1017]: time="2024-04-16T18:04:25.572271610Z" level=info msg="Starting up"
	Apr 16 18:05:25 multinode-945500-m03 dockerd[1017]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 18:05:25 multinode-945500-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 18:05:25 multinode-945500-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 18:05:25 multinode-945500-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0416 18:05:25.470006    6472 out.go:239] * 
	W0416 18:05:25.481941    6472 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_node_109417fbff9a3b9650da7ef19b4c6539dd55bbf9_4.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 18:05:25.482617    6472 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-windows-amd64.exe node add -p multinode-945500 -v 3 --alsologtostderr" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-945500 -n multinode-945500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-945500 -n multinode-945500: (10.8470591s)
helpers_test.go:244: <<< TestMultiNode/serial/AddNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/AddNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-945500 logs -n 25: (7.4576014s)
helpers_test.go:252: TestMultiNode/serial/AddNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| mount   | C:\Users\jenkins.minikube5:/minikube-host         | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:49 UTC |                     |
	|         | --profile mount-start-2-738600 --v 0              |                      |                   |                |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |                |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |                |                     |                     |
	|         |                                                 0 |                      |                   |                |                     |                     |
	| ssh     | mount-start-2-738600 ssh -- ls                    | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:49 UTC | 16 Apr 24 17:49 UTC |
	|         | /minikube-host                                    |                      |                   |                |                     |                     |
	| delete  | -p mount-start-1-738600                           | mount-start-1-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:49 UTC | 16 Apr 24 17:50 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |                |                     |                     |
	| ssh     | mount-start-2-738600 ssh -- ls                    | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:50 UTC | 16 Apr 24 17:50 UTC |
	|         | /minikube-host                                    |                      |                   |                |                     |                     |
	| stop    | -p mount-start-2-738600                           | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:50 UTC | 16 Apr 24 17:50 UTC |
	| start   | -p mount-start-2-738600                           | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:50 UTC |                     |
	| delete  | -p mount-start-2-738600                           | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:53 UTC | 16 Apr 24 17:54 UTC |
	| delete  | -p mount-start-1-738600                           | mount-start-1-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:54 UTC | 16 Apr 24 17:54 UTC |
	| start   | -p multinode-945500                               | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:54 UTC | 16 Apr 24 18:00 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |                |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |                |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |                |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- apply -f                   | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- rollout                    | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | status deployment/busybox                         |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- get pods -o                | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- get pods -o                | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2 --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2 --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2 -- nslookup              |                      |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx -- nslookup              |                      |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- get pods -o                | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2                          |                      |                   |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | busybox-7fdf7869d9-jxvx2 -- sh                    |                      |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.80.1                          |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx                          |                      |                   |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | busybox-7fdf7869d9-ns8nx -- sh                    |                      |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.80.1                          |                      |                   |                |                     |                     |
	| node    | add -p multinode-945500 -v 3                      | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:02 UTC |                     |
	|         | --alsologtostderr                                 |                      |                   |                |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:54:38
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 17:54:38.458993    6988 out.go:291] Setting OutFile to fd 960 ...
	I0416 17:54:38.459581    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:54:38.459581    6988 out.go:304] Setting ErrFile to fd 676...
	I0416 17:54:38.459678    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:54:38.483191    6988 out.go:298] Setting JSON to false
	I0416 17:54:38.487192    6988 start.go:129] hostinfo: {"hostname":"minikube5","uptime":27708,"bootTime":1713262370,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 17:54:38.487192    6988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 17:54:38.488186    6988 out.go:177] * [multinode-945500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 17:54:38.489188    6988 notify.go:220] Checking for updates...
	I0416 17:54:38.489188    6988 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:54:38.490185    6988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:54:38.490185    6988 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 17:54:38.491184    6988 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:54:38.491184    6988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:54:38.493214    6988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:54:43.355603    6988 out.go:177] * Using the hyperv driver based on user configuration
	I0416 17:54:43.356197    6988 start.go:297] selected driver: hyperv
	I0416 17:54:43.356197    6988 start.go:901] validating driver "hyperv" against <nil>
	I0416 17:54:43.356273    6988 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:54:43.396166    6988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 17:54:43.397176    6988 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:54:43.397504    6988 cni.go:84] Creating CNI manager for ""
	I0416 17:54:43.397537    6988 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0416 17:54:43.397537    6988 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0416 17:54:43.397711    6988 start.go:340] cluster config:
	{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:54:43.397711    6988 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:54:43.399183    6988 out.go:177] * Starting "multinode-945500" primary control-plane node in "multinode-945500" cluster
	I0416 17:54:43.399538    6988 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 17:54:43.399538    6988 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 17:54:43.399538    6988 cache.go:56] Caching tarball of preloaded images
	I0416 17:54:43.399538    6988 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 17:54:43.400205    6988 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 17:54:43.400795    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:54:43.401059    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json: {Name:mk67f15eab35e69a3277eb33417238e6d320045f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:54:43.401506    6988 start.go:360] acquireMachinesLock for multinode-945500: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:54:43.402049    6988 start.go:364] duration metric: took 542.9µs to acquireMachinesLock for "multinode-945500"
	I0416 17:54:43.402113    6988 start.go:93] Provisioning new machine with config: &{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65422424e940246c9ed2 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 17:54:43.402113    6988 start.go:125] createHost starting for "" (driver="hyperv")
	I0416 17:54:43.403221    6988 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 17:54:43.403542    6988 start.go:159] libmachine.API.Create for "multinode-945500" (driver="hyperv")
	I0416 17:54:43.403595    6988 client.go:168] LocalClient.Create starting
	I0416 17:54:43.404086    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Decoding PEM data...
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Parsing certificate...
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Decoding PEM data...
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Parsing certificate...
	I0416 17:54:43.404276    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 17:54:45.288246    6988 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 17:54:45.288342    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:45.288493    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 17:54:46.922912    6988 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 17:54:46.922912    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:46.923010    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 17:54:48.270889    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 17:54:48.270889    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:48.270889    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 17:54:51.466825    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 17:54:51.466825    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:51.468671    6988 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 17:54:51.806641    6988 main.go:141] libmachine: Creating SSH key...
	I0416 17:54:52.035351    6988 main.go:141] libmachine: Creating VM...
	I0416 17:54:52.036345    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 17:54:54.656446    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 17:54:54.656494    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:54.656633    6988 main.go:141] libmachine: Using switch "Default Switch"
	I0416 17:54:54.656633    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 17:54:56.229378    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 17:54:56.229607    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:56.229607    6988 main.go:141] libmachine: Creating VHD
	I0416 17:54:56.229607    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 17:54:59.733727    6988 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5A486C23-0EFD-43D1-8BEB-4A60ACE1DF98
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 17:54:59.733800    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:59.733873    6988 main.go:141] libmachine: Writing magic tar header
	I0416 17:54:59.733915    6988 main.go:141] libmachine: Writing SSH key tar header
	I0416 17:54:59.741031    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 17:55:02.758991    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:02.758991    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:02.759271    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\disk.vhd' -SizeBytes 20000MB
	I0416 17:55:05.056217    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:05.056217    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:05.057316    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-945500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 17:55:08.311574    6988 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-945500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 17:55:08.311574    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:08.311863    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-945500 -DynamicMemoryEnabled $false
	I0416 17:55:10.388584    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:10.389586    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:10.389586    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-945500 -Count 2
	I0416 17:55:12.413711    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:12.413711    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:12.414332    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-945500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\boot2docker.iso'
	I0416 17:55:14.741711    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:14.741711    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:14.741711    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-945500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\disk.vhd'
	I0416 17:55:17.003645    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:17.003645    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:17.003645    6988 main.go:141] libmachine: Starting VM...
	I0416 17:55:17.003645    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-945500
	I0416 17:55:19.573472    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:19.573700    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:19.573700    6988 main.go:141] libmachine: Waiting for host to start...
	I0416 17:55:19.573790    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:21.624051    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:21.624051    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:21.624771    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:23.884692    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:23.884692    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:24.892318    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:26.899190    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:26.899190    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:26.899348    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:29.176655    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:29.176655    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:30.177215    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:32.143102    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:32.143102    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:32.143464    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:34.404986    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:34.405261    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:35.419315    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:37.438553    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:37.438958    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:37.438958    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:39.692795    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:39.692795    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:40.700997    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:42.744138    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:42.744982    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:42.745064    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:45.083348    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:55:45.083348    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:45.083448    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:47.049900    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:47.050444    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:47.050523    6988 machine.go:94] provisionDockerMachine start ...
	I0416 17:55:47.050566    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:49.000414    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:49.000414    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:49.000537    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:51.284377    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:55:51.285296    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:51.290721    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:55:51.303784    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:55:51.303784    6988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:55:51.430251    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 17:55:51.430320    6988 buildroot.go:166] provisioning hostname "multinode-945500"
	I0416 17:55:51.430320    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:53.414239    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:53.414239    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:53.414512    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:55.729573    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:55:55.729573    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:55.733714    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:55:55.734245    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:55:55.734245    6988 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-945500 && echo "multinode-945500" | sudo tee /etc/hostname
	I0416 17:55:55.888906    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-945500
	
	I0416 17:55:55.888975    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:57.782302    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:57.782302    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:57.782786    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:00.073834    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:00.073834    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:00.078560    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:00.078657    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:00.078657    6988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-945500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-945500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-945500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:56:00.230030    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:56:00.230079    6988 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 17:56:00.230079    6988 buildroot.go:174] setting up certificates
	I0416 17:56:00.230079    6988 provision.go:84] configureAuth start
	I0416 17:56:00.230182    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:02.147449    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:02.147449    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:02.147449    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:04.449327    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:04.450388    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:04.450388    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:06.443860    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:06.443860    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:06.444760    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:08.814817    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:08.814817    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:08.814817    6988 provision.go:143] copyHostCerts
	I0416 17:56:08.815787    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 17:56:08.816004    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 17:56:08.816004    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 17:56:08.816371    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 17:56:08.817376    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 17:56:08.817582    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 17:56:08.817582    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 17:56:08.817582    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 17:56:08.818480    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 17:56:08.818480    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 17:56:08.818480    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 17:56:08.819278    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 17:56:08.820184    6988 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-945500 san=[127.0.0.1 172.19.91.227 localhost minikube multinode-945500]
	I0416 17:56:09.120922    6988 provision.go:177] copyRemoteCerts
	I0416 17:56:09.129891    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:56:09.129891    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:11.105788    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:11.105788    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:11.105788    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:13.452243    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:13.452243    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:13.452604    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:56:13.553822    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.42368s)
	I0416 17:56:13.553822    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 17:56:13.553822    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 17:56:13.595187    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 17:56:13.595187    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0416 17:56:13.635052    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 17:56:13.635528    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 17:56:13.675952    6988 provision.go:87] duration metric: took 13.4440865s to configureAuth
	I0416 17:56:13.676049    6988 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:56:13.676421    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:56:13.676504    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:15.610838    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:15.610926    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:15.610926    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:17.912484    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:17.913491    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:17.916946    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:17.917531    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:17.917531    6988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 17:56:18.061063    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 17:56:18.061063    6988 buildroot.go:70] root file system type: tmpfs
	I0416 17:56:18.061690    6988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 17:56:18.061690    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:20.049603    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:20.049603    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:20.049978    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:22.383521    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:22.383521    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:22.387896    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:22.388601    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:22.388601    6988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 17:56:22.561164    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 17:56:22.561269    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:24.443674    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:24.444091    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:24.444193    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:26.758959    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:26.758959    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:26.765429    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:26.765429    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:26.765957    6988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 17:56:28.704221    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 17:56:28.704221    6988 machine.go:97] duration metric: took 41.6513356s to provisionDockerMachine
	I0416 17:56:28.704317    6988 client.go:171] duration metric: took 1m45.2947032s to LocalClient.Create
	I0416 17:56:28.704398    6988 start.go:167] duration metric: took 1m45.2948041s to libmachine.API.Create "multinode-945500"
	I0416 17:56:28.704398    6988 start.go:293] postStartSetup for "multinode-945500" (driver="hyperv")
	I0416 17:56:28.704489    6988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:56:28.714148    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:56:28.714148    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:30.638973    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:30.638973    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:30.639089    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:32.961564    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:32.961564    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:32.961564    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:56:33.069322    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3549265s)
	I0416 17:56:33.078710    6988 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:56:33.085331    6988 command_runner.go:130] > NAME=Buildroot
	I0416 17:56:33.085331    6988 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 17:56:33.085331    6988 command_runner.go:130] > ID=buildroot
	I0416 17:56:33.085331    6988 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 17:56:33.085331    6988 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 17:56:33.086070    6988 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:56:33.086171    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 17:56:33.086945    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 17:56:33.088129    6988 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 17:56:33.088129    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 17:56:33.106615    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:56:33.129263    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 17:56:33.174677    6988 start.go:296] duration metric: took 4.469934s for postStartSetup
	I0416 17:56:33.177364    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:35.133709    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:35.133709    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:35.133796    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:37.452577    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:37.452577    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:37.453529    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:56:37.455914    6988 start.go:128] duration metric: took 1m54.0472303s to createHost
	I0416 17:56:37.455914    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:39.425449    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:39.425449    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:39.426011    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:41.744115    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:41.744115    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:41.748497    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:41.748631    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:41.748631    6988 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 17:56:41.875115    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713290202.039643702
	
	I0416 17:56:41.875272    6988 fix.go:216] guest clock: 1713290202.039643702
	I0416 17:56:41.875272    6988 fix.go:229] Guest: 2024-04-16 17:56:42.039643702 +0000 UTC Remote: 2024-04-16 17:56:37.4559145 +0000 UTC m=+119.121500601 (delta=4.583729202s)
	I0416 17:56:41.875399    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:43.872191    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:43.873117    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:43.873117    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:46.207797    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:46.207797    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:46.213575    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:46.213575    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:46.213575    6988 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713290201
	I0416 17:56:46.370971    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 17:56:41 UTC 2024
	
	I0416 17:56:46.370971    6988 fix.go:236] clock set: Tue Apr 16 17:56:41 UTC 2024
	 (err=<nil>)
	I0416 17:56:46.371058    6988 start.go:83] releasing machines lock for "multinode-945500", held for 2m2.9620339s
	I0416 17:56:46.371284    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:48.308157    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:48.308984    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:48.309041    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:50.575031    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:50.575031    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:50.579218    6988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:56:50.579218    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:50.586441    6988 ssh_runner.go:195] Run: cat /version.json
	I0416 17:56:50.586979    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:52.634472    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:52.634472    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:52.634472    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:52.639621    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:52.639621    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:52.639621    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:55.047917    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:55.048488    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:55.048917    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:56:55.065759    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:55.066462    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:55.066602    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:56:55.354145    6988 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 17:56:55.354145    6988 command_runner.go:130] > {"iso_version": "v1.33.0-1713236417-18649", "kicbase_version": "v0.0.43-1713215244-18647", "minikube_version": "v1.33.0-beta.0", "commit": "4ec1a3e88a9f3ffb3930e555284d907468ae83a6"}
	I0416 17:56:55.354145    6988 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7746557s)
	I0416 17:56:55.354145    6988 ssh_runner.go:235] Completed: cat /version.json: (4.7668953s)
	I0416 17:56:55.366453    6988 ssh_runner.go:195] Run: systemctl --version
	I0416 17:56:55.375220    6988 command_runner.go:130] > systemd 252 (252)
	I0416 17:56:55.375220    6988 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0416 17:56:55.384285    6988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 17:56:55.392020    6988 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0416 17:56:55.392567    6988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:56:55.401209    6988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:56:55.426637    6988 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0416 17:56:55.427403    6988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 17:56:55.427503    6988 start.go:494] detecting cgroup driver to use...
	I0416 17:56:55.427534    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:56:55.457110    6988 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0416 17:56:55.470104    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 17:56:55.494070    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 17:56:55.511268    6988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 17:56:55.523954    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 17:56:55.549161    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 17:56:55.576216    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 17:56:55.602400    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 17:56:55.630572    6988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:56:55.656816    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 17:56:55.683825    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 17:56:55.710767    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 17:56:55.737864    6988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:56:55.753678    6988 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 17:56:55.761926    6988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:56:55.794919    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:56:55.964839    6988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 17:56:55.993258    6988 start.go:494] detecting cgroup driver to use...
	I0416 17:56:56.002807    6988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 17:56:56.020460    6988 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0416 17:56:56.020914    6988 command_runner.go:130] > [Unit]
	I0416 17:56:56.020998    6988 command_runner.go:130] > Description=Docker Application Container Engine
	I0416 17:56:56.020998    6988 command_runner.go:130] > Documentation=https://docs.docker.com
	I0416 17:56:56.020998    6988 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0416 17:56:56.020998    6988 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0416 17:56:56.021071    6988 command_runner.go:130] > StartLimitBurst=3
	I0416 17:56:56.021071    6988 command_runner.go:130] > StartLimitIntervalSec=60
	I0416 17:56:56.021071    6988 command_runner.go:130] > [Service]
	I0416 17:56:56.021071    6988 command_runner.go:130] > Type=notify
	I0416 17:56:56.021071    6988 command_runner.go:130] > Restart=on-failure
	I0416 17:56:56.021071    6988 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0416 17:56:56.021156    6988 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0416 17:56:56.021156    6988 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0416 17:56:56.021156    6988 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0416 17:56:56.021241    6988 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0416 17:56:56.021281    6988 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0416 17:56:56.021354    6988 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0416 17:56:56.021427    6988 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0416 17:56:56.021427    6988 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0416 17:56:56.021427    6988 command_runner.go:130] > ExecStart=
	I0416 17:56:56.021508    6988 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0416 17:56:56.021508    6988 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0416 17:56:56.021586    6988 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0416 17:56:56.021586    6988 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0416 17:56:56.021663    6988 command_runner.go:130] > LimitNOFILE=infinity
	I0416 17:56:56.021663    6988 command_runner.go:130] > LimitNPROC=infinity
	I0416 17:56:56.021663    6988 command_runner.go:130] > LimitCORE=infinity
	I0416 17:56:56.021663    6988 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0416 17:56:56.021663    6988 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0416 17:56:56.021738    6988 command_runner.go:130] > TasksMax=infinity
	I0416 17:56:56.021738    6988 command_runner.go:130] > TimeoutStartSec=0
	I0416 17:56:56.021738    6988 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0416 17:56:56.021738    6988 command_runner.go:130] > Delegate=yes
	I0416 17:56:56.021738    6988 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0416 17:56:56.021811    6988 command_runner.go:130] > KillMode=process
	I0416 17:56:56.021811    6988 command_runner.go:130] > [Install]
	I0416 17:56:56.021811    6988 command_runner.go:130] > WantedBy=multi-user.target
	I0416 17:56:56.032694    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:56:56.060059    6988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:56:56.101716    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:56:56.131287    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 17:56:56.163190    6988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 17:56:56.210983    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 17:56:56.231971    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:56:56.261397    6988 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0416 17:56:56.272666    6988 ssh_runner.go:195] Run: which cri-dockerd
	I0416 17:56:56.276995    6988 command_runner.go:130] > /usr/bin/cri-dockerd
	I0416 17:56:56.286591    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 17:56:56.299870    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 17:56:56.337571    6988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 17:56:56.500406    6988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 17:56:56.646617    6988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 17:56:56.646617    6988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 17:56:56.690996    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:56:56.871261    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:56:59.295937    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4242935s)
	I0416 17:56:59.304599    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 17:56:59.333610    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 17:56:59.361657    6988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 17:56:59.541548    6988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 17:56:59.705672    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:56:59.866404    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 17:56:59.907640    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 17:56:59.939748    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:00.107406    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 17:57:00.200852    6988 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 17:57:00.212214    6988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 17:57:00.220777    6988 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0416 17:57:00.220777    6988 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0416 17:57:00.220777    6988 command_runner.go:130] > Device: 0,22	Inode: 885         Links: 1
	I0416 17:57:00.220777    6988 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0416 17:57:00.220777    6988 command_runner.go:130] > Access: 2024-04-16 17:57:00.296362377 +0000
	I0416 17:57:00.220777    6988 command_runner.go:130] > Modify: 2024-04-16 17:57:00.296362377 +0000
	I0416 17:57:00.220777    6988 command_runner.go:130] > Change: 2024-04-16 17:57:00.300362562 +0000
	I0416 17:57:00.220777    6988 command_runner.go:130] >  Birth: -
	I0416 17:57:00.220777    6988 start.go:562] Will wait 60s for crictl version
	I0416 17:57:00.230775    6988 ssh_runner.go:195] Run: which crictl
	I0416 17:57:00.235786    6988 command_runner.go:130] > /usr/bin/crictl
	I0416 17:57:00.245023    6988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:57:00.292622    6988 command_runner.go:130] > Version:  0.1.0
	I0416 17:57:00.292622    6988 command_runner.go:130] > RuntimeName:  docker
	I0416 17:57:00.292622    6988 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0416 17:57:00.292739    6988 command_runner.go:130] > RuntimeApiVersion:  v1
	I0416 17:57:00.292794    6988 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 17:57:00.301388    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 17:57:00.331067    6988 command_runner.go:130] > 26.0.1
	I0416 17:57:00.337439    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 17:57:00.365025    6988 command_runner.go:130] > 26.0.1
	I0416 17:57:00.367212    6988 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 17:57:00.367413    6988 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 17:57:00.371515    6988 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 17:57:00.371597    6988 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 17:57:00.371597    6988 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 17:57:00.371597    6988 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 17:57:00.374158    6988 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 17:57:00.374158    6988 ip.go:210] interface addr: 172.19.80.1/20
	I0416 17:57:00.380883    6988 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 17:57:00.386921    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 17:57:00.407839    6988 kubeadm.go:877] updating cluster {Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 17:57:00.407839    6988 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 17:57:00.416191    6988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 17:57:00.437198    6988 docker.go:685] Got preloaded images: 
	I0416 17:57:00.437198    6988 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0416 17:57:00.446472    6988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 17:57:00.461564    6988 command_runner.go:139] > {"Repositories":{}}
	I0416 17:57:00.472373    6988 ssh_runner.go:195] Run: which lz4
	I0416 17:57:00.477412    6988 command_runner.go:130] > /usr/bin/lz4
	I0416 17:57:00.477412    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0416 17:57:00.487276    6988 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 17:57:00.492861    6988 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 17:57:00.493543    6988 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 17:57:00.493600    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0416 17:57:01.970587    6988 docker.go:649] duration metric: took 1.4924844s to copy over tarball
	I0416 17:57:01.979028    6988 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 17:57:10.810575    6988 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.831045s)
	I0416 17:57:10.810689    6988 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 17:57:10.875450    6988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 17:57:10.895935    6988 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.29.3":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.29.3":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.29.3":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b
5bbe4f71784e392"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.29.3":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0416 17:57:10.895935    6988 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0416 17:57:10.938742    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:11.136149    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:57:13.733531    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5972349s)
	I0416 17:57:13.742898    6988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0416 17:57:13.765918    6988 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:57:13.765918    6988 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0416 17:57:13.765918    6988 cache_images.go:84] Images are preloaded, skipping loading
	I0416 17:57:13.765918    6988 kubeadm.go:928] updating node { 172.19.91.227 8443 v1.29.3 docker true true} ...
	I0416 17:57:13.766906    6988 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-945500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.91.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 17:57:13.774901    6988 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 17:57:13.804585    6988 command_runner.go:130] > cgroupfs
	I0416 17:57:13.804682    6988 cni.go:84] Creating CNI manager for ""
	I0416 17:57:13.804682    6988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 17:57:13.804682    6988 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 17:57:13.804682    6988 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.91.227 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-945500 NodeName:multinode-945500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.91.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.91.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 17:57:13.804682    6988 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.91.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-945500"
	  kubeletExtraArgs:
	    node-ip: 172.19.91.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.91.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 17:57:13.813761    6988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 17:57:13.830081    6988 command_runner.go:130] > kubeadm
	I0416 17:57:13.830081    6988 command_runner.go:130] > kubectl
	I0416 17:57:13.830081    6988 command_runner.go:130] > kubelet
	I0416 17:57:13.830165    6988 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 17:57:13.838770    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 17:57:13.852826    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0416 17:57:13.878799    6988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 17:57:13.905862    6988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0416 17:57:13.943017    6988 ssh_runner.go:195] Run: grep 172.19.91.227	control-plane.minikube.internal$ /etc/hosts
	I0416 17:57:13.949214    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.91.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 17:57:13.980273    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:14.153644    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:57:14.177658    6988 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500 for IP: 172.19.91.227
	I0416 17:57:14.178687    6988 certs.go:194] generating shared ca certs ...
	I0416 17:57:14.178687    6988 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.179455    6988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 17:57:14.179902    6988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 17:57:14.180190    6988 certs.go:256] generating profile certs ...
	I0416 17:57:14.180755    6988 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.key
	I0416 17:57:14.180755    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.crt with IP's: []
	I0416 17:57:14.411174    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.crt ...
	I0416 17:57:14.411174    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.crt: {Name:mkc0623b015c4c96d85b8b3b13eb2cc6d3ac8763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.412171    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.key ...
	I0416 17:57:14.412171    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.key: {Name:mkbd9c01c6892e02b0a8d9c434e98a742e87c2cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.413058    6988 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.e3ea85af
	I0416 17:57:14.414154    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.e3ea85af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.91.227]
	I0416 17:57:14.575473    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.e3ea85af ...
	I0416 17:57:14.575473    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.e3ea85af: {Name:mk62c37573433811afa986b89a237b6c7bb0d1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.576358    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.e3ea85af ...
	I0416 17:57:14.576358    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.e3ea85af: {Name:mk6c23ff826064c327d5a977affe1877b10d9b72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.577574    6988 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.e3ea85af -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt
	I0416 17:57:14.590486    6988 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.e3ea85af -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key
	I0416 17:57:14.590795    6988 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key
	I0416 17:57:14.590795    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt with IP's: []
	I0416 17:57:14.794779    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt ...
	I0416 17:57:14.795779    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt: {Name:mk40c9063a89a73b56bd4ccd89e15d6559ba1e37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.796782    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key ...
	I0416 17:57:14.796782    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key: {Name:mk5e95084b6a4adeb7806da3f2d851d8919dced5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.798528    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 17:57:14.798760    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 17:57:14.799041    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 17:57:14.799237    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 17:57:14.799423    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 17:57:14.799630    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 17:57:14.799827    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 17:57:14.806003    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 17:57:14.809977    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 17:57:14.809977    6988 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 17:57:14.809977    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 17:57:14.809977    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 17:57:14.811027    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 17:57:14.811027    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 17:57:14.811027    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 17:57:14.811551    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 17:57:14.811650    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:14.811737    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 17:57:14.812935    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 17:57:14.852949    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 17:57:14.891959    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 17:57:14.931152    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 17:57:14.968412    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 17:57:15.008983    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 17:57:15.048515    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 17:57:15.089091    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 17:57:15.125356    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 17:57:15.162621    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 17:57:15.205246    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 17:57:15.248985    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 17:57:15.289002    6988 ssh_runner.go:195] Run: openssl version
	I0416 17:57:15.296351    6988 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0416 17:57:15.308333    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 17:57:15.335334    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:15.341349    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:15.342189    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:15.351026    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:15.358591    6988 command_runner.go:130] > b5213941
	I0416 17:57:15.367034    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 17:57:15.391467    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 17:57:15.416387    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 17:57:15.423831    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 17:57:15.423957    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 17:57:15.434442    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 17:57:15.442459    6988 command_runner.go:130] > 51391683
	I0416 17:57:15.451530    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
	I0416 17:57:15.480393    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 17:57:15.509124    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 17:57:15.515721    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 17:57:15.515827    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 17:57:15.524021    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 17:57:15.533694    6988 command_runner.go:130] > 3ec20f2e
	I0416 17:57:15.541647    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
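The three `openssl x509 -hash` / `ln -fs` pairs above are minikube installing CA certificates the way OpenSSL expects: each cert is symlinked into `/etc/ssl/certs` under its subject hash as `<hash>.0`. A minimal sketch of that step, with the cert and target directory as parameters (the paths in the log, such as `/usr/share/ca-certificates/minikubeCA.pem`, are specific to this test run):

```shell
#!/usr/bin/env sh
# Sketch of the CA-trust step seen in the log: OpenSSL locates trusted CAs
# in its cert directory by subject-hash filename "<hash>.0", so we compute
# the hash and symlink the certificate under that name.
link_ca() {
  cert=$1   # path to the PEM certificate, e.g. minikubeCA.pem
  dir=$2    # OpenSSL cert directory, e.g. /etc/ssl/certs
  # subject hash, same value the log shows (e.g. "b5213941")
  hash=$(openssl x509 -hash -noout -in "$cert")
  ln -fs "$cert" "$dir/$hash.0"
}
```

In the log minikube runs this via `sudo /bin/bash -c "test -L … || ln -fs …"` so the symlink is only (re)created when missing; the `c_rehash` utility shipped with OpenSSL performs the same linking for an entire directory.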
	I0416 17:57:15.567570    6988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:57:15.573415    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 17:57:15.573840    6988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 17:57:15.574281    6988 kubeadm.go:391] StartCluster: {Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:57:15.580506    6988 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 17:57:15.612292    6988 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 17:57:15.627466    6988 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0416 17:57:15.628097    6988 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0416 17:57:15.628097    6988 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0416 17:57:15.635032    6988 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 17:57:15.660479    6988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 17:57:15.676695    6988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0416 17:57:15.676792    6988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0416 17:57:15.676792    6988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0416 17:57:15.676855    6988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:57:15.676918    6988 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:57:15.676973    6988 kubeadm.go:156] found existing configuration files:
	
	I0416 17:57:15.684985    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 17:57:15.700012    6988 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:57:15.700126    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:57:15.708938    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 17:57:15.734829    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 17:57:15.747861    6988 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:57:15.748201    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:57:15.756696    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 17:57:15.784559    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 17:57:15.804131    6988 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:57:15.804131    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:57:15.815130    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 17:57:15.838118    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 17:57:15.854130    6988 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:57:15.854130    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:57:15.862912    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 17:57:15.876128    6988 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 17:57:16.053541    6988 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 17:57:16.053541    6988 command_runner.go:130] > [init] Using Kubernetes version: v1.29.3
	I0416 17:57:16.053865    6988 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 17:57:16.053865    6988 command_runner.go:130] > [preflight] Running pre-flight checks
	I0416 17:57:16.200461    6988 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:57:16.200461    6988 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:57:16.200461    6988 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:57:16.200461    6988 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:57:16.200461    6988 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:57:16.200461    6988 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:57:16.451494    6988 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:57:16.452473    6988 out.go:204]   - Generating certificates and keys ...
	I0416 17:57:16.451494    6988 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:57:16.453479    6988 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 17:57:16.453479    6988 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0416 17:57:16.453479    6988 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0416 17:57:16.453479    6988 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 17:57:16.705308    6988 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 17:57:16.705409    6988 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 17:57:16.859312    6988 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 17:57:16.859312    6988 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0416 17:57:17.049120    6988 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 17:57:17.049237    6988 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0416 17:57:17.314616    6988 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 17:57:17.314728    6988 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0416 17:57:17.509835    6988 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 17:57:17.509835    6988 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0416 17:57:17.510247    6988 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-945500] and IPs [172.19.91.227 127.0.0.1 ::1]
	I0416 17:57:17.510247    6988 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-945500] and IPs [172.19.91.227 127.0.0.1 ::1]
	I0416 17:57:17.791919    6988 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 17:57:17.791919    6988 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0416 17:57:17.792356    6988 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-945500] and IPs [172.19.91.227 127.0.0.1 ::1]
	I0416 17:57:17.792356    6988 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-945500] and IPs [172.19.91.227 127.0.0.1 ::1]
	I0416 17:57:17.995022    6988 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 17:57:17.995106    6988 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 17:57:18.220639    6988 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 17:57:18.220729    6988 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 17:57:18.582174    6988 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0416 17:57:18.582274    6988 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 17:57:18.582480    6988 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:57:18.582554    6988 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:57:18.743963    6988 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:57:18.744564    6988 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:57:19.067769    6988 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 17:57:19.068120    6988 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 17:57:19.240331    6988 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:57:19.240672    6988 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:57:19.461195    6988 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:57:19.461195    6988 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:57:19.652943    6988 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:57:19.653442    6988 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:57:19.654516    6988 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:57:19.654516    6988 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:57:19.660559    6988 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:57:19.660559    6988 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:57:19.661534    6988 out.go:204]   - Booting up control plane ...
	I0416 17:57:19.661534    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:57:19.661534    6988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:57:19.662544    6988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:57:19.662544    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:57:19.663540    6988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:57:19.663540    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:57:19.684534    6988 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:57:19.685153    6988 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:57:19.687532    6988 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:57:19.687532    6988 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:57:19.687532    6988 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 17:57:19.687532    6988 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0416 17:57:19.860703    6988 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:57:19.860788    6988 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:57:26.366044    6988 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.507200 seconds
	I0416 17:57:26.366044    6988 command_runner.go:130] > [apiclient] All control plane components are healthy after 6.507200 seconds
	I0416 17:57:26.385213    6988 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 17:57:26.385213    6988 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 17:57:26.408456    6988 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 17:57:26.408456    6988 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 17:57:26.942416    6988 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0416 17:57:26.942416    6988 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 17:57:26.943198    6988 kubeadm.go:309] [mark-control-plane] Marking the node multinode-945500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 17:57:26.943369    6988 command_runner.go:130] > [mark-control-plane] Marking the node multinode-945500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 17:57:27.456093    6988 kubeadm.go:309] [bootstrap-token] Using token: v7bkxo.pzxgmh7iiytdovwq
	I0416 17:57:27.456235    6988 command_runner.go:130] > [bootstrap-token] Using token: v7bkxo.pzxgmh7iiytdovwq
	I0416 17:57:27.456953    6988 out.go:204]   - Configuring RBAC rules ...
	I0416 17:57:27.457407    6988 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 17:57:27.457407    6988 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 17:57:27.473244    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 17:57:27.473244    6988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 17:57:27.485961    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 17:57:27.486019    6988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 17:57:27.492510    6988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 17:57:27.492510    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 17:57:27.496129    6988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 17:57:27.496129    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 17:57:27.501092    6988 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 17:57:27.501753    6988 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 17:57:27.517045    6988 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 17:57:27.517045    6988 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 17:57:27.829288    6988 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 17:57:27.829833    6988 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0416 17:57:27.880030    6988 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 17:57:27.880030    6988 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0416 17:57:27.883021    6988 kubeadm.go:309] 
	I0416 17:57:27.883395    6988 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0416 17:57:27.883467    6988 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 17:57:27.883558    6988 kubeadm.go:309] 
	I0416 17:57:27.883809    6988 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 17:57:27.883809    6988 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0416 17:57:27.883877    6988 kubeadm.go:309] 
	I0416 17:57:27.883877    6988 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 17:57:27.883877    6988 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0416 17:57:27.883877    6988 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 17:57:27.883877    6988 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 17:57:27.883877    6988 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 17:57:27.883877    6988 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 17:57:27.883877    6988 kubeadm.go:309] 
	I0416 17:57:27.883877    6988 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0416 17:57:27.883877    6988 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 17:57:27.883877    6988 kubeadm.go:309] 
	I0416 17:57:27.884765    6988 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 17:57:27.884765    6988 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 17:57:27.884765    6988 kubeadm.go:309] 
	I0416 17:57:27.884765    6988 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 17:57:27.884765    6988 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0416 17:57:27.884765    6988 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 17:57:27.884765    6988 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 17:57:27.884765    6988 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 17:57:27.884765    6988 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 17:57:27.884765    6988 kubeadm.go:309] 
	I0416 17:57:27.884765    6988 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0416 17:57:27.884765    6988 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 17:57:27.885775    6988 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0416 17:57:27.885775    6988 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 17:57:27.885775    6988 kubeadm.go:309] 
	I0416 17:57:27.885775    6988 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token v7bkxo.pzxgmh7iiytdovwq \
	I0416 17:57:27.885775    6988 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token v7bkxo.pzxgmh7iiytdovwq \
	I0416 17:57:27.885775    6988 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 17:57:27.885775    6988 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 17:57:27.885775    6988 kubeadm.go:309] 	--control-plane 
	I0416 17:57:27.885775    6988 command_runner.go:130] > 	--control-plane 
	I0416 17:57:27.885775    6988 kubeadm.go:309] 
	I0416 17:57:27.886749    6988 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0416 17:57:27.886749    6988 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 17:57:27.886749    6988 kubeadm.go:309] 
	I0416 17:57:27.886749    6988 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token v7bkxo.pzxgmh7iiytdovwq \
	I0416 17:57:27.886749    6988 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token v7bkxo.pzxgmh7iiytdovwq \
	I0416 17:57:27.886749    6988 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 17:57:27.886749    6988 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 17:57:27.886749    6988 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 17:57:27.887747    6988 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 17:57:27.887747    6988 cni.go:84] Creating CNI manager for ""
	I0416 17:57:27.887747    6988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 17:57:27.888782    6988 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 17:57:27.898776    6988 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 17:57:27.906367    6988 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0416 17:57:27.906367    6988 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0416 17:57:27.906446    6988 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0416 17:57:27.906446    6988 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0416 17:57:27.906446    6988 command_runner.go:130] > Access: 2024-04-16 17:55:43.845708000 +0000
	I0416 17:57:27.906446    6988 command_runner.go:130] > Modify: 2024-04-16 08:43:32.000000000 +0000
	I0416 17:57:27.906446    6988 command_runner.go:130] > Change: 2024-04-16 17:55:34.250000000 +0000
	I0416 17:57:27.906446    6988 command_runner.go:130] >  Birth: -
	I0416 17:57:27.906446    6988 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 17:57:27.906446    6988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 17:57:27.988519    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 17:57:28.490851    6988 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0416 17:57:28.498847    6988 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0416 17:57:28.511858    6988 command_runner.go:130] > serviceaccount/kindnet created
	I0416 17:57:28.523843    6988 command_runner.go:130] > daemonset.apps/kindnet created
	I0416 17:57:28.526917    6988 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 17:57:28.536843    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:28.538723    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-945500 minikube.k8s.io/updated_at=2024_04_16T17_57_28_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=multinode-945500 minikube.k8s.io/primary=true
	I0416 17:57:28.553542    6988 command_runner.go:130] > -16
	I0416 17:57:28.553542    6988 ops.go:34] apiserver oom_adj: -16
	I0416 17:57:28.663066    6988 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0416 17:57:28.672472    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:28.703696    6988 command_runner.go:130] > node/multinode-945500 labeled
	I0416 17:57:28.779726    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:29.176642    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:29.310699    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:29.688820    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:29.783095    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:30.180137    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:30.283623    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:30.677902    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:30.770542    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:31.173788    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:31.267177    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:31.681339    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:31.776737    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:32.179098    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:32.275419    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:32.685593    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:32.784034    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:33.184934    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:33.284755    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:33.689894    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:33.786322    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:34.177543    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:34.278089    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:34.688074    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:34.788843    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:35.176613    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:35.278146    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:35.690652    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:35.790109    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:36.185543    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:36.283203    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:36.685087    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:36.787681    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:37.183826    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:37.287103    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:37.686779    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:37.790505    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:38.186663    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:38.313330    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:38.690145    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:38.792194    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:39.188096    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:39.307296    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:39.673049    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:39.777746    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:40.175109    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:40.317376    6988 command_runner.go:130] > NAME      SECRETS   AGE
	I0416 17:57:40.317525    6988 command_runner.go:130] > default   0         0s
	I0416 17:57:40.317525    6988 kubeadm.go:1107] duration metric: took 11.7899387s to wait for elevateKubeSystemPrivileges
	W0416 17:57:40.317725    6988 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 17:57:40.317725    6988 kubeadm.go:393] duration metric: took 24.7420862s to StartCluster
	I0416 17:57:40.317841    6988 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:40.318068    6988 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:57:40.320080    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:40.321302    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 17:57:40.321470    6988 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 17:57:40.321470    6988 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 17:57:40.321614    6988 addons.go:69] Setting storage-provisioner=true in profile "multinode-945500"
	I0416 17:57:40.321614    6988 addons.go:234] Setting addon storage-provisioner=true in "multinode-945500"
	I0416 17:57:40.321614    6988 host.go:66] Checking if "multinode-945500" exists ...
	I0416 17:57:40.321614    6988 addons.go:69] Setting default-storageclass=true in profile "multinode-945500"
	I0416 17:57:40.321614    6988 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-945500"
	I0416 17:57:40.321614    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:57:40.322690    6988 out.go:177] * Verifying Kubernetes components...
	I0416 17:57:40.322606    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:40.322690    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:40.336146    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:40.543940    6988 command_runner.go:130] > apiVersion: v1
	I0416 17:57:40.544012    6988 command_runner.go:130] > data:
	I0416 17:57:40.544012    6988 command_runner.go:130] >   Corefile: |
	I0416 17:57:40.544012    6988 command_runner.go:130] >     .:53 {
	I0416 17:57:40.544012    6988 command_runner.go:130] >         errors
	I0416 17:57:40.544012    6988 command_runner.go:130] >         health {
	I0416 17:57:40.544088    6988 command_runner.go:130] >            lameduck 5s
	I0416 17:57:40.544088    6988 command_runner.go:130] >         }
	I0416 17:57:40.544088    6988 command_runner.go:130] >         ready
	I0416 17:57:40.544112    6988 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0416 17:57:40.544112    6988 command_runner.go:130] >            pods insecure
	I0416 17:57:40.544112    6988 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0416 17:57:40.544112    6988 command_runner.go:130] >            ttl 30
	I0416 17:57:40.544112    6988 command_runner.go:130] >         }
	I0416 17:57:40.544112    6988 command_runner.go:130] >         prometheus :9153
	I0416 17:57:40.544112    6988 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0416 17:57:40.544191    6988 command_runner.go:130] >            max_concurrent 1000
	I0416 17:57:40.544191    6988 command_runner.go:130] >         }
	I0416 17:57:40.544191    6988 command_runner.go:130] >         cache 30
	I0416 17:57:40.544191    6988 command_runner.go:130] >         loop
	I0416 17:57:40.544191    6988 command_runner.go:130] >         reload
	I0416 17:57:40.544191    6988 command_runner.go:130] >         loadbalance
	I0416 17:57:40.544191    6988 command_runner.go:130] >     }
	I0416 17:57:40.544191    6988 command_runner.go:130] > kind: ConfigMap
	I0416 17:57:40.544191    6988 command_runner.go:130] > metadata:
	I0416 17:57:40.544191    6988 command_runner.go:130] >   creationTimestamp: "2024-04-16T17:57:27Z"
	I0416 17:57:40.544191    6988 command_runner.go:130] >   name: coredns
	I0416 17:57:40.544191    6988 command_runner.go:130] >   namespace: kube-system
	I0416 17:57:40.544296    6988 command_runner.go:130] >   resourceVersion: "274"
	I0416 17:57:40.544296    6988 command_runner.go:130] >   uid: 8b9b71a6-9315-41d9-b055-6f10c4c901fd
	I0416 17:57:40.544483    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 17:57:40.652097    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:57:40.902041    6988 command_runner.go:130] > configmap/coredns replaced
	I0416 17:57:40.905269    6988 start.go:946] {"host.minikube.internal": 172.19.80.1} host record injected into CoreDNS's ConfigMap
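The `configmap/coredns replaced` step above pipes the live Corefile through a sed expression that inserts a `hosts {}` block (mapping `host.minikube.internal` to the gateway IP) ahead of the `forward` plugin, plus a `log` directive ahead of `errors`. A minimal stand-alone replay of that sed expression, assuming GNU sed (the expression and the 172.19.80.1 address are taken verbatim from the `ssh_runner` command above; the demo file path is hypothetical):

```shell
# Build a minimal Corefile fragment with the 8-space plugin indentation
# that the sed anchors expect.
cat > /tmp/corefile-demo <<'EOF'
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
EOF
# Insert the hosts{} block before "forward" and "log" before "errors",
# exactly as minikube does before feeding the result to kubectl replace.
sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.80.1 host.minikube.internal\n           fallthrough\n        }' \
    -e '/^        errors *$/i \        log' /tmp/corefile-demo
```

In the real run the same output is piped into `kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -`, which is what produces the `configmap/coredns replaced` line.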
	I0416 17:57:40.906408    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:57:40.906594    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:57:40.907054    6988 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.91.227:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 17:57:40.907195    6988 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.91.227:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 17:57:40.908042    6988 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 17:57:40.908659    6988 node_ready.go:35] waiting up to 6m0s for node "multinode-945500" to be "Ready" ...
	I0416 17:57:40.908860    6988 round_trippers.go:463] GET https://172.19.91.227:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0416 17:57:40.908860    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:40.908860    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:40.908860    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:40.908860    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:40.908860    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:40.908955    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:40.908955    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:40.937154    6988 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0416 17:57:40.937516    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:40.937516    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:40.937516    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:40.937516    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:40.937516    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:40.937516    6988 round_trippers.go:580]     Audit-Id: e2e8d91f-cc17-4b2b-a543-43ca22e7c70f
	I0416 17:57:40.937516    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:40.937792    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:40.938405    6988 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0416 17:57:40.938543    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:40.938543    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:40.938543    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:40.938543    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:40.938543    6988 round_trippers.go:580]     Content-Length: 291
	I0416 17:57:40.938662    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:40.938662    6988 round_trippers.go:580]     Audit-Id: 9f1849c0-96cc-4587-8702-5be0aa8b035b
	I0416 17:57:40.938662    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:40.938662    6988 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"474cfa32-79eb-4bf1-81ff-b938f83eaa0d","resourceVersion":"383","creationTimestamp":"2024-04-16T17:57:27Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0416 17:57:40.939508    6988 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"474cfa32-79eb-4bf1-81ff-b938f83eaa0d","resourceVersion":"383","creationTimestamp":"2024-04-16T17:57:27Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0416 17:57:40.939654    6988 round_trippers.go:463] PUT https://172.19.91.227:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0416 17:57:40.939709    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:40.939709    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:40.939709    6988 round_trippers.go:473]     Content-Type: application/json
	I0416 17:57:40.939709    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:40.954484    6988 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0416 17:57:40.954484    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Content-Length: 291
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Audit-Id: 33fbc171-b87c-4a8b-8b71-fb72b829abb0
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:40.954484    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:40.954484    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:40.954484    6988 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"474cfa32-79eb-4bf1-81ff-b938f83eaa0d","resourceVersion":"385","creationTimestamp":"2024-04-16T17:57:27Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0416 17:57:41.416463    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:41.416653    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:41.416463    6988 round_trippers.go:463] GET https://172.19.91.227:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0416 17:57:41.416653    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:41.416653    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:41.416653    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:41.416739    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:41.416886    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:41.420106    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:41.420495    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:41.420495    6988 round_trippers.go:580]     Audit-Id: 0ef8009e-dcde-4e08-b2eb-b21c97c9713b
	I0416 17:57:41.420495    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:41.420495    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:41.420495    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:41.420495    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:41.420495    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:41.420873    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:41.420873    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:41.420970    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:41.420970    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Content-Length: 291
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Audit-Id: 876a0092-4e47-429b-acd8-759d477820ca
	I0416 17:57:41.421083    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:41.421155    6988 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"474cfa32-79eb-4bf1-81ff-b938f83eaa0d","resourceVersion":"395","creationTimestamp":"2024-04-16T17:57:27Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0416 17:57:41.421374    6988 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-945500" context rescaled to 1 replicas
	I0416 17:57:41.920343    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:41.920343    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:41.920343    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:41.920343    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:41.925445    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 17:57:41.925445    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:41.925445    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:42 GMT
	I0416 17:57:41.925445    6988 round_trippers.go:580]     Audit-Id: 7df7d5cd-8d90-47e3-a620-e333515b8855
	I0416 17:57:41.925445    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:41.925445    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:41.925445    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:41.925445    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:41.927690    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:42.389093    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:42.389178    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:42.389320    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:42.390035    6988 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:57:42.389320    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:42.390775    6988 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:57:42.390775    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 17:57:42.390840    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:57:42.390906    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:42.391435    6988 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.91.227:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 17:57:42.392060    6988 addons.go:234] Setting addon default-storageclass=true in "multinode-945500"
	I0416 17:57:42.392151    6988 host.go:66] Checking if "multinode-945500" exists ...
	I0416 17:57:42.393041    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:42.412561    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:42.412743    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:42.412743    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:42.412743    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:42.419056    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 17:57:42.419366    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:42.419366    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:42.419366    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:42.419366    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:42.419366    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:42.419366    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:42 GMT
	I0416 17:57:42.419366    6988 round_trippers.go:580]     Audit-Id: b3f3bd38-d9b8-462a-9951-d6845f4c1e8b
	I0416 17:57:42.419606    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:42.919136    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:42.919136    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:42.919136    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:42.919136    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:42.922770    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:42.923481    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:42.923481    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:43 GMT
	I0416 17:57:42.923481    6988 round_trippers.go:580]     Audit-Id: 0619e710-cc23-453b-93b8-902006c18fd4
	I0416 17:57:42.923481    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:42.923481    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:42.923481    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:42.923481    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:42.924373    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:42.924671    6988 node_ready.go:53] node "multinode-945500" has status "Ready":"False"
	I0416 17:57:43.422289    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:43.422289    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:43.422289    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:43.422289    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:43.426297    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:43.426759    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:43.426855    6988 round_trippers.go:580]     Audit-Id: 3881c6f2-0168-43dd-afc5-e5828acf3c8d
	I0416 17:57:43.426855    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:43.426855    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:43.426855    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:43.426936    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:43.426936    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:43 GMT
	I0416 17:57:43.427005    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:43.912103    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:43.912103    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:43.912103    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:43.912103    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:43.915707    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:43.916753    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:43.916753    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:44 GMT
	I0416 17:57:43.916753    6988 round_trippers.go:580]     Audit-Id: 5c816ab6-0256-4da7-8677-2eed63915566
	I0416 17:57:43.916782    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:43.916782    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:43.916782    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:43.916782    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:43.917611    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:44.422232    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:44.422232    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:44.422232    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:44.422232    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:44.425983    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:44.426131    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:44.426131    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:44.426131    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:44 GMT
	I0416 17:57:44.426209    6988 round_trippers.go:580]     Audit-Id: 9338168a-3808-4f3d-8a58-744d48096dc5
	I0416 17:57:44.426209    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:44.426209    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:44.426209    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:44.426209    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:44.514747    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:44.514747    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:44.515754    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:57:44.517753    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:44.517753    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:44.517753    6988 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 17:57:44.517753    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 17:57:44.517753    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:44.911211    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:44.911456    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:44.911456    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:44.911456    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:44.915270    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:44.915270    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:44.915270    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:44.915270    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:44.915270    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:44.915270    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:44.915270    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:45 GMT
	I0416 17:57:44.915270    6988 round_trippers.go:580]     Audit-Id: 4c85a024-69e3-42e3-8a96-0b4369f957e4
	I0416 17:57:44.916208    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:45.417189    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:45.417189    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:45.417189    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:45.417189    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:45.424768    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 17:57:45.424768    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:45.424768    6988 round_trippers.go:580]     Audit-Id: 0310038d-76b3-4992-9ac3-7533f23a7d71
	I0416 17:57:45.424768    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:45.424768    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:45.424768    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:45.424768    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:45.424768    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:45 GMT
	I0416 17:57:45.425371    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:45.425371    6988 node_ready.go:53] node "multinode-945500" has status "Ready":"False"
	I0416 17:57:45.923330    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:45.923330    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:45.923330    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:45.923330    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:45.925920    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:45.925920    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:45.926718    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:45.926718    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:45.926718    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:46 GMT
	I0416 17:57:45.926718    6988 round_trippers.go:580]     Audit-Id: 97c2ee9c-f0ff-43e0-b2a8-48327b90a95f
	I0416 17:57:45.926718    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:45.926718    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:45.927203    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:46.418033    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:46.418033    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:46.418033    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:46.418033    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:46.501786    6988 round_trippers.go:574] Response Status: 200 OK in 83 milliseconds
	I0416 17:57:46.501786    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:46.501786    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:46 GMT
	I0416 17:57:46.501786    6988 round_trippers.go:580]     Audit-Id: 7df6f9f0-10ff-4db8-bfad-3fc7f1364386
	I0416 17:57:46.501786    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:46.501786    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:46.501905    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:46.501905    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:46.503216    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:46.635075    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:46.635075    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:46.635935    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:57:46.921581    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:46.921653    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:46.921653    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:46.921720    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:46.924533    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:46.924533    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:46.924758    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:46.924758    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:46.924758    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:46.924758    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:46.924758    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:47 GMT
	I0416 17:57:46.924758    6988 round_trippers.go:580]     Audit-Id: e78831c8-f850-4752-a899-e59b21c78198
	I0416 17:57:46.924832    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:46.982609    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:57:46.982609    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:46.982609    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:57:47.140657    6988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:57:47.423704    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:47.423704    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:47.423704    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:47.423704    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:47.427881    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:47.428047    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:47.428047    6988 round_trippers.go:580]     Audit-Id: 23292552-c2df-4084-b58f-d36e231163f8
	I0416 17:57:47.428047    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:47.428047    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:47.428047    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:47.428047    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:47.428047    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:47 GMT
	I0416 17:57:47.428436    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:47.428909    6988 node_ready.go:53] node "multinode-945500" has status "Ready":"False"
	I0416 17:57:47.642156    6988 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0416 17:57:47.642156    6988 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0416 17:57:47.642263    6988 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0416 17:57:47.642263    6988 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0416 17:57:47.642263    6988 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0416 17:57:47.642352    6988 command_runner.go:130] > pod/storage-provisioner created
	I0416 17:57:47.915174    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:47.915174    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:47.915174    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:47.915174    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:47.919802    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:47.919802    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:47.919802    6988 round_trippers.go:580]     Audit-Id: 695031a3-c73c-4762-a80a-ead4be6d3a90
	I0416 17:57:47.919802    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:47.919802    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:47.919802    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:47.919802    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:47.919802    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:48 GMT
	I0416 17:57:47.921798    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:48.424055    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:48.424122    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:48.424122    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:48.424122    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:48.427517    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:48.427517    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:48.427517    6988 round_trippers.go:580]     Audit-Id: 7545d9c7-2c95-4fab-863b-976fb672f07e
	I0416 17:57:48.427517    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:48.427517    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:48.427517    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:48.427517    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:48.427517    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:48 GMT
	I0416 17:57:48.428336    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:48.912182    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:48.912285    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:48.912285    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:48.912285    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:48.915718    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:48.915718    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:48.915718    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:48.915718    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:48.915718    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:49 GMT
	I0416 17:57:48.915718    6988 round_trippers.go:580]     Audit-Id: 2263b32c-d20d-46cd-879e-9105b86a7194
	I0416 17:57:48.915718    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:48.915718    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:48.916253    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:49.012275    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:57:49.012444    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:49.012783    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:57:49.142232    6988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 17:57:49.275828    6988 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0416 17:57:49.276194    6988 round_trippers.go:463] GET https://172.19.91.227:8443/apis/storage.k8s.io/v1/storageclasses
	I0416 17:57:49.276271    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:49.276271    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:49.276381    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:49.279132    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:49.279132    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:49.279132    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:49.279132    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:49.279132    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:49.279132    6988 round_trippers.go:580]     Content-Length: 1273
	I0416 17:57:49.279132    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:49 GMT
	I0416 17:57:49.279397    6988 round_trippers.go:580]     Audit-Id: b06ff280-6eac-43c1-91fe-e3ebbad21f66
	I0416 17:57:49.279397    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:49.279397    6988 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"standard","uid":"fad243f1-4905-48ae-985d-d89cda0607a0","resourceVersion":"419","creationTimestamp":"2024-04-16T17:57:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-16T17:57:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0416 17:57:49.279545    6988 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"fad243f1-4905-48ae-985d-d89cda0607a0","resourceVersion":"419","creationTimestamp":"2024-04-16T17:57:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-16T17:57:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0416 17:57:49.279545    6988 round_trippers.go:463] PUT https://172.19.91.227:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0416 17:57:49.280079    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:49.280079    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:49.280079    6988 round_trippers.go:473]     Content-Type: application/json
	I0416 17:57:49.280122    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:49.283131    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:49.283131    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Audit-Id: 58e327bf-d681-4c51-8630-376535cfdae0
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:49.283131    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:49.283131    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Content-Length: 1220
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:49 GMT
	I0416 17:57:49.283131    6988 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"fad243f1-4905-48ae-985d-d89cda0607a0","resourceVersion":"419","creationTimestamp":"2024-04-16T17:57:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-16T17:57:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0416 17:57:49.284142    6988 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 17:57:49.285110    6988 addons.go:505] duration metric: took 8.9631309s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 17:57:49.413824    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:49.413824    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:49.413824    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:49.413824    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:49.420066    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 17:57:49.420066    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:49.420066    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:49 GMT
	I0416 17:57:49.420066    6988 round_trippers.go:580]     Audit-Id: 673fcfb7-e79c-42ba-abaf-e828c3df7a7a
	I0416 17:57:49.420066    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:49.420066    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:49.420066    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:49.420066    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:49.420066    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:49.915557    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:49.915632    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:49.915632    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:49.915632    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:49.920023    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:49.920023    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:49.920023    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:49.920023    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:49.920023    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:49.920023    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:49.920023    6988 round_trippers.go:580]     Audit-Id: cb813c2c-6bb9-41d0-a192-81d5df39cc31
	I0416 17:57:49.920023    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:49.920752    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:49.920881    6988 node_ready.go:53] node "multinode-945500" has status "Ready":"False"
	I0416 17:57:50.414309    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:50.414309    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.414309    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.414309    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.421246    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 17:57:50.421246    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.421246    6988 round_trippers.go:580]     Audit-Id: 9a47d54e-a489-4e7c-8e6e-1768c6e24a06
	I0416 17:57:50.421246    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.421246    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.421246    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.421246    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.421246    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:50.421586    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:50.422041    6988 node_ready.go:49] node "multinode-945500" has status "Ready":"True"
	I0416 17:57:50.422127    6988 node_ready.go:38] duration metric: took 9.5128501s for node "multinode-945500" to be "Ready" ...
	I0416 17:57:50.422127    6988 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:57:50.422288    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:50.422288    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.422288    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.422352    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.426293    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:50.426293    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.426293    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:50.426293    6988 round_trippers.go:580]     Audit-Id: 13196519-ea29-4856-beaa-5c943f886806
	I0416 17:57:50.426293    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.426293    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.426645    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.426645    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.427551    6988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56336 chars]
	I0416 17:57:50.432315    6988 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:50.432315    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:50.432315    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.432315    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.432315    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.435446    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:50.435446    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.435446    6988 round_trippers.go:580]     Audit-Id: 0da838d3-4490-46a7-8d52-0929abb29d06
	I0416 17:57:50.435446    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.435446    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.435446    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.435446    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.435446    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:50.435667    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0416 17:57:50.436341    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:50.436417    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.436417    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.436417    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.441670    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 17:57:50.441670    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.441670    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:50.441670    6988 round_trippers.go:580]     Audit-Id: 7f63ee25-4ff7-418f-b7b2-b71003d58b29
	I0416 17:57:50.441670    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.441670    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.441670    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.441670    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.441670    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:50.933620    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:50.933620    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.933620    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.933620    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.936638    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:50.936638    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.937401    6988 round_trippers.go:580]     Audit-Id: 61428305-720d-4f2d-9189-d4c9892ef7e3
	I0416 17:57:50.937401    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.937401    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.937401    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.937401    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.937401    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:51 GMT
	I0416 17:57:50.937680    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0416 17:57:50.938372    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:50.938438    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.938438    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.938438    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.940646    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:50.940646    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.940646    6988 round_trippers.go:580]     Audit-Id: 62d4cd2d-a2dc-447d-8fe8-0ab2e8469374
	I0416 17:57:50.940646    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.940646    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.940646    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.940646    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.940646    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:51 GMT
	I0416 17:57:50.941893    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:51.436888    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:51.436973    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:51.437057    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:51.437057    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:51.440468    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:51.440468    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:51.440468    6988 round_trippers.go:580]     Audit-Id: 854d513c-8ed8-40d2-a6f4-c3ce631c5044
	I0416 17:57:51.440468    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:51.440468    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:51.440468    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:51.440468    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:51.440468    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:51 GMT
	I0416 17:57:51.441473    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0416 17:57:51.442446    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:51.442513    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:51.442513    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:51.442513    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:51.448074    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 17:57:51.448074    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:51.448074    6988 round_trippers.go:580]     Audit-Id: ea821fd7-5bb9-4fc8-adab-1d7de329d33c
	I0416 17:57:51.448074    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:51.448074    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:51.448074    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:51.448074    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:51.448074    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:51 GMT
	I0416 17:57:51.448761    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:51.936346    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:51.936438    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:51.936438    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:51.936438    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:51.940774    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:51.940774    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:51.940774    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:51.940774    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:51.940774    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:51.940774    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:51.940774    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:51.940774    6988 round_trippers.go:580]     Audit-Id: 39edef38-eddb-4269-abe8-a908e1d21987
	I0416 17:57:51.941262    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0416 17:57:51.941999    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:51.942068    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:51.942068    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:51.942068    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:51.944728    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:51.944728    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:51.944728    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:51.944728    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:51.945637    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:51.945637    6988 round_trippers.go:580]     Audit-Id: e9f648f9-92bc-4242-8c2c-17b661038154
	I0416 17:57:51.945637    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:51.945637    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:51.945961    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.434152    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:52.434152    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.434152    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.434152    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.438737    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:52.438737    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.438905    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.438905    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.438905    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.438905    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.438905    6988 round_trippers.go:580]     Audit-Id: 64fc4c09-2c08-4c20-886d-b65cc89badc2
	I0416 17:57:52.438905    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.439311    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0416 17:57:52.440372    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.440372    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.440471    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.440471    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.442800    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.442800    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.442800    6988 round_trippers.go:580]     Audit-Id: 69a074dd-0323-4dfd-a4d9-2a31cf93ae57
	I0416 17:57:52.442800    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.442800    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.442800    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.442800    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.442800    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.443974    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.444376    6988 pod_ready.go:92] pod "coredns-76f75df574-86z7h" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.444463    6988 pod_ready.go:81] duration metric: took 2.0119463s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.444463    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.444559    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 17:57:52.444559    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.444559    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.444559    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.448264    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:52.448675    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.448709    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.448709    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.448709    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.448709    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.448709    6988 round_trippers.go:580]     Audit-Id: 6a1f3697-4191-47e0-93ea-8556479112b5
	I0416 17:57:52.448709    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.448895    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"245cef70-3506-471b-9bf6-dd14a2c23d8c","resourceVersion":"372","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.91.227:2379","kubernetes.io/config.hash":"c735a32dacf9631b2b4787fe99cff316","kubernetes.io/config.mirror":"c735a32dacf9631b2b4787fe99cff316","kubernetes.io/config.seen":"2024-04-16T17:57:28.101466445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0416 17:57:52.449544    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.449618    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.449618    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.449618    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.457774    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0416 17:57:52.457774    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.457774    6988 round_trippers.go:580]     Audit-Id: 6aa9935f-5cde-4c2d-90c1-770e6d9b42ec
	I0416 17:57:52.457774    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.457774    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.457774    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.457774    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.457774    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.457774    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.457774    6988 pod_ready.go:92] pod "etcd-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.457774    6988 pod_ready.go:81] duration metric: took 13.3102ms for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.458783    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.458817    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-945500
	I0416 17:57:52.458817    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.458817    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.458817    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.462379    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:52.462379    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.462379    6988 round_trippers.go:580]     Audit-Id: 3d6fa3f7-ff7f-4322-a2e8-b5a0c4fb1daf
	I0416 17:57:52.462379    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.462379    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.462379    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.462379    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.462379    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.462379    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-945500","namespace":"kube-system","uid":"c6ae969a-de5d-4c7e-af09-b1a5eb21f2ab","resourceVersion":"314","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.91.227:8443","kubernetes.io/config.hash":"564fae5a3e5851c815d6092b123a5395","kubernetes.io/config.mirror":"564fae5a3e5851c815d6092b123a5395","kubernetes.io/config.seen":"2024-04-16T17:57:28.101471746Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0416 17:57:52.464244    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.464374    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.464374    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.464374    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.466690    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.466690    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.466690    6988 round_trippers.go:580]     Audit-Id: d3396616-a825-4d83-94f7-1691134d1559
	I0416 17:57:52.466690    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.466690    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.466690    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.466690    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.466690    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.467128    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.467128    6988 pod_ready.go:92] pod "kube-apiserver-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.467128    6988 pod_ready.go:81] duration metric: took 8.3444ms for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.467128    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.467128    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-945500
	I0416 17:57:52.467655    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.467655    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.467655    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.469965    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.469965    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.469965    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.469965    6988 round_trippers.go:580]     Audit-Id: 69b40722-0130-4c39-98a1-4a3e7990d75a
	I0416 17:57:52.469965    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.469965    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.469965    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.469965    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.469965    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-945500","namespace":"kube-system","uid":"01b937c2-9827-4240-83f0-3536fec5eb5e","resourceVersion":"345","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.mirror":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.seen":"2024-04-16T17:57:28.101473146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0416 17:57:52.471692    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.471736    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.471736    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.471736    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.474312    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.474312    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.474312    6988 round_trippers.go:580]     Audit-Id: ef6911fd-c5b9-4c1a-85d8-6d4810547589
	I0416 17:57:52.474312    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.474312    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.474312    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.474312    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.474312    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.474842    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.475259    6988 pod_ready.go:92] pod "kube-controller-manager-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.475298    6988 pod_ready.go:81] duration metric: took 8.1314ms for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.475298    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.475372    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 17:57:52.475407    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.475446    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.475446    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.480328    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:52.480328    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.480328    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.480328    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.480328    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.480328    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.480328    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.480328    6988 round_trippers.go:580]     Audit-Id: 5505b192-812e-4b7d-b573-cc48b255735a
	I0416 17:57:52.480328    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rfxsg","generateName":"kube-proxy-","namespace":"kube-system","uid":"b740e6e0-4768-4dd4-a958-307662a92578","resourceVersion":"401","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0416 17:57:52.480969    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.480969    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.480969    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.480969    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.484123    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:52.484123    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.484123    6988 round_trippers.go:580]     Audit-Id: 242d2743-3177-42b4-9e74-5bce35db3f1d
	I0416 17:57:52.484123    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.484123    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.484123    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.484123    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.484123    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.484955    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.485557    6988 pod_ready.go:92] pod "kube-proxy-rfxsg" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.485602    6988 pod_ready.go:81] duration metric: took 10.2584ms for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.485602    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.638123    6988 request.go:629] Waited for 152.4159ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 17:57:52.638123    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 17:57:52.638123    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.638123    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.638123    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.642880    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:52.642880    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.642880    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.642880    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.642880    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.642880    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.642880    6988 round_trippers.go:580]     Audit-Id: 8f2e930a-7531-48ab-83eb-71103cec3dde
	I0416 17:57:52.642880    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.642880    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-945500","namespace":"kube-system","uid":"a09e52e8-1ac2-4c22-8a3d-57969fae85a9","resourceVersion":"310","creationTimestamp":"2024-04-16T17:57:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.mirror":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.seen":"2024-04-16T17:57:20.694761708Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0416 17:57:52.840231    6988 request.go:629] Waited for 196.2271ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.840540    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.840540    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.840640    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.840640    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.845870    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 17:57:52.845870    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.845870    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.845870    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.845870    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.845870    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.845870    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:52.845870    6988 round_trippers.go:580]     Audit-Id: 05acaca5-b7c1-4fab-9ace-d775a055e4f5
	I0416 17:57:52.846425    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.846879    6988 pod_ready.go:92] pod "kube-scheduler-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.846957    6988 pod_ready.go:81] duration metric: took 361.3343ms for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.846957    6988 pod_ready.go:38] duration metric: took 2.4246918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:57:52.846957    6988 api_server.go:52] waiting for apiserver process to appear ...
	I0416 17:57:52.859063    6988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:57:52.885312    6988 command_runner.go:130] > 2058
	I0416 17:57:52.885400    6988 api_server.go:72] duration metric: took 12.562985s to wait for apiserver process to appear ...
	I0416 17:57:52.885400    6988 api_server.go:88] waiting for apiserver healthz status ...
	I0416 17:57:52.885400    6988 api_server.go:253] Checking apiserver healthz at https://172.19.91.227:8443/healthz ...
	I0416 17:57:52.898178    6988 api_server.go:279] https://172.19.91.227:8443/healthz returned 200:
	ok
	I0416 17:57:52.898356    6988 round_trippers.go:463] GET https://172.19.91.227:8443/version
	I0416 17:57:52.898430    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.898430    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.898463    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.900671    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.900731    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.900731    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.900731    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Content-Length: 263
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Audit-Id: 23327aeb-4415-44a9-ac4c-ac1fb850d1c4
	I0416 17:57:52.900731    6988 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0416 17:57:52.900731    6988 api_server.go:141] control plane version: v1.29.3
	I0416 17:57:52.900731    6988 api_server.go:131] duration metric: took 15.3302ms to wait for apiserver health ...
	I0416 17:57:52.900731    6988 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 17:57:53.042203    6988 request.go:629] Waited for 141.464ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:53.042203    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:53.042203    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:53.042203    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:53.042203    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:53.047811    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:53.047811    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:53.047931    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:53.047931    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:53.047931    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:53.047931    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:53.047931    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:53.047931    6988 round_trippers.go:580]     Audit-Id: 0112d2ef-1059-4960-9329-11966d09c0ed
	I0416 17:57:53.050025    6988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56450 chars]
	I0416 17:57:53.056232    6988 system_pods.go:59] 8 kube-system pods found
	I0416 17:57:53.056303    6988 system_pods.go:61] "coredns-76f75df574-86z7h" [1ca004a0-0575-4576-a5ed-ba0891a7d277] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "etcd-multinode-945500" [245cef70-3506-471b-9bf6-dd14a2c23d8c] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kindnet-tp7jl" [91595b62-10ee-47cb-a0c9-2ca83ad70af7] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kube-apiserver-multinode-945500" [c6ae969a-de5d-4c7e-af09-b1a5eb21f2ab] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kube-controller-manager-multinode-945500" [01b937c2-9827-4240-83f0-3536fec5eb5e] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kube-proxy-rfxsg" [b740e6e0-4768-4dd4-a958-307662a92578] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kube-scheduler-multinode-945500" [a09e52e8-1ac2-4c22-8a3d-57969fae85a9] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "storage-provisioner" [3bd5cc95-eef6-473e-b6f9-898568046f1b] Running
	I0416 17:57:53.056378    6988 system_pods.go:74] duration metric: took 155.5639ms to wait for pod list to return data ...
	I0416 17:57:53.056378    6988 default_sa.go:34] waiting for default service account to be created ...
	I0416 17:57:53.242714    6988 request.go:629] Waited for 186.2414ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/default/serviceaccounts
	I0416 17:57:53.242956    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/default/serviceaccounts
	I0416 17:57:53.242956    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:53.243091    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:53.243091    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:53.246460    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:53.246460    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:53.246962    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:53.246962    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Content-Length: 261
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Audit-Id: da3e035a-782e-4d26-b641-e9ec06113208
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:53.247049    6988 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"26260d2a-9800-4f2e-87ba-a34049d52e3f","resourceVersion":"332","creationTimestamp":"2024-04-16T17:57:40Z"}}]}
	I0416 17:57:53.247481    6988 default_sa.go:45] found service account: "default"
	I0416 17:57:53.247563    6988 default_sa.go:55] duration metric: took 191.174ms for default service account to be created ...
	I0416 17:57:53.247563    6988 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 17:57:53.445373    6988 request.go:629] Waited for 197.6083ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:53.445373    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:53.445373    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:53.445373    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:53.445373    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:53.453613    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0416 17:57:53.453613    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:53.453613    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:53.453613    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:53.453613    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:53.453613    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:53.453613    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:53.453613    6988 round_trippers.go:580]     Audit-Id: a54cbc48-ccbf-4ab0-b75f-121f6c3ab39c
	I0416 17:57:53.454598    6988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56450 chars]
	I0416 17:57:53.457215    6988 system_pods.go:86] 8 kube-system pods found
	I0416 17:57:53.457215    6988 system_pods.go:89] "coredns-76f75df574-86z7h" [1ca004a0-0575-4576-a5ed-ba0891a7d277] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "etcd-multinode-945500" [245cef70-3506-471b-9bf6-dd14a2c23d8c] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kindnet-tp7jl" [91595b62-10ee-47cb-a0c9-2ca83ad70af7] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kube-apiserver-multinode-945500" [c6ae969a-de5d-4c7e-af09-b1a5eb21f2ab] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kube-controller-manager-multinode-945500" [01b937c2-9827-4240-83f0-3536fec5eb5e] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kube-proxy-rfxsg" [b740e6e0-4768-4dd4-a958-307662a92578] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kube-scheduler-multinode-945500" [a09e52e8-1ac2-4c22-8a3d-57969fae85a9] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "storage-provisioner" [3bd5cc95-eef6-473e-b6f9-898568046f1b] Running
	I0416 17:57:53.457215    6988 system_pods.go:126] duration metric: took 209.6402ms to wait for k8s-apps to be running ...
	I0416 17:57:53.457215    6988 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 17:57:53.465993    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:57:53.490843    6988 system_svc.go:56] duration metric: took 32.799ms WaitForService to wait for kubelet
	I0416 17:57:53.490843    6988 kubeadm.go:576] duration metric: took 13.1684808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:57:53.490945    6988 node_conditions.go:102] verifying NodePressure condition ...
	I0416 17:57:53.646796    6988 request.go:629] Waited for 155.5885ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes
	I0416 17:57:53.647092    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes
	I0416 17:57:53.647092    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:53.647092    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:53.647092    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:53.650750    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:53.650750    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:53.650750    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:53.650750    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:53.650750    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:53.651249    6988 round_trippers.go:580]     Audit-Id: a39fa908-8f98-49bc-a6db-1564faa14911
	I0416 17:57:53.651249    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:53.651249    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:53.651424    6988 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 4843 chars]
	I0416 17:57:53.651922    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 17:57:53.651922    6988 node_conditions.go:123] node cpu capacity is 2
	I0416 17:57:53.651922    6988 node_conditions.go:105] duration metric: took 160.9684ms to run NodePressure ...
	I0416 17:57:53.652035    6988 start.go:240] waiting for startup goroutines ...
	I0416 17:57:53.652035    6988 start.go:245] waiting for cluster config update ...
	I0416 17:57:53.652035    6988 start.go:254] writing updated cluster config ...
	I0416 17:57:53.653564    6988 out.go:177] 
	I0416 17:57:53.669380    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:57:53.669380    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:57:53.672905    6988 out.go:177] * Starting "multinode-945500-m02" worker node in "multinode-945500" cluster
	I0416 17:57:53.673088    6988 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 17:57:53.673617    6988 cache.go:56] Caching tarball of preloaded images
	I0416 17:57:53.673750    6988 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 17:57:53.673750    6988 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 17:57:53.674279    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:57:53.682401    6988 start.go:360] acquireMachinesLock for multinode-945500-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:57:53.682401    6988 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-945500-m02"
	I0416 17:57:53.682989    6988 start.go:93] Provisioning new machine with config: &{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0416 17:57:53.682989    6988 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0416 17:57:53.683581    6988 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 17:57:53.683581    6988 start.go:159] libmachine.API.Create for "multinode-945500" (driver="hyperv")
	I0416 17:57:53.683581    6988 client.go:168] LocalClient.Create starting
	I0416 17:57:53.684171    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 17:57:53.684171    6988 main.go:141] libmachine: Decoding PEM data...
	I0416 17:57:53.684730    6988 main.go:141] libmachine: Parsing certificate...
	I0416 17:57:53.684763    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 17:57:53.684763    6988 main.go:141] libmachine: Decoding PEM data...
	I0416 17:57:53.684763    6988 main.go:141] libmachine: Parsing certificate...
	I0416 17:57:53.684763    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 17:57:55.392368    6988 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 17:57:55.392368    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:55.393364    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 17:57:56.931487    6988 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 17:57:56.931487    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:56.931487    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 17:57:58.272841    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 17:57:58.273519    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:58.273519    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 17:58:01.537799    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 17:58:01.537799    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:01.539609    6988 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 17:58:01.848885    6988 main.go:141] libmachine: Creating SSH key...
	I0416 17:58:02.010218    6988 main.go:141] libmachine: Creating VM...
	I0416 17:58:02.011217    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 17:58:04.625040    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 17:58:04.625040    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:04.625917    6988 main.go:141] libmachine: Using switch "Default Switch"
	I0416 17:58:04.625917    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 17:58:06.258751    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 17:58:06.258751    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:06.258751    6988 main.go:141] libmachine: Creating VHD
	I0416 17:58:06.258751    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 17:58:09.852420    6988 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C09A8F8B-563A-41CF-AB1F-9B4C422F3FC9
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 17:58:09.852568    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:09.852568    6988 main.go:141] libmachine: Writing magic tar header
	I0416 17:58:09.852638    6988 main.go:141] libmachine: Writing SSH key tar header
	I0416 17:58:09.862039    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 17:58:12.878751    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:12.878751    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:12.878751    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\disk.vhd' -SizeBytes 20000MB
	I0416 17:58:15.237605    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:15.237605    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:15.237605    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-945500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 17:58:18.410858    6988 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-945500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 17:58:18.411873    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:18.411914    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-945500-m02 -DynamicMemoryEnabled $false
	I0416 17:58:20.486445    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:20.486524    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:20.486600    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-945500-m02 -Count 2
	I0416 17:58:22.474057    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:22.474057    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:22.474057    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-945500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\boot2docker.iso'
	I0416 17:58:24.877959    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:24.877959    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:24.878134    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-945500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\disk.vhd'
	I0416 17:58:27.308442    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:27.309253    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:27.309253    6988 main.go:141] libmachine: Starting VM...
	I0416 17:58:27.309346    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-945500-m02
	I0416 17:58:29.937973    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:29.937973    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:29.937973    6988 main.go:141] libmachine: Waiting for host to start...
	I0416 17:58:29.938140    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:32.040669    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:32.040669    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:32.040763    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:34.346849    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:34.346849    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:35.361237    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:37.380851    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:37.380851    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:37.381523    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:39.667097    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:39.667097    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:40.670143    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:42.688257    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:42.688257    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:42.688328    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:44.946196    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:44.946196    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:45.948919    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:47.976127    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:47.976127    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:47.976535    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:50.265300    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:50.265477    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:51.278063    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:53.353234    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:53.353234    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:53.353542    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:55.731097    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:58:55.731585    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:55.731648    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:57.706259    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:57.706259    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:57.706259    6988 machine.go:94] provisionDockerMachine start ...
	I0416 17:58:57.706337    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:59.674406    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:59.674406    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:59.675593    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:01.982982    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:01.982982    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:01.989231    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:02.000855    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:02.000855    6988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:59:02.131967    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 17:59:02.132116    6988 buildroot.go:166] provisioning hostname "multinode-945500-m02"
	I0416 17:59:02.132244    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:04.030355    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:04.031102    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:04.031102    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:06.380424    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:06.380424    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:06.385493    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:06.385574    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:06.385574    6988 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-945500-m02 && echo "multinode-945500-m02" | sudo tee /etc/hostname
	I0416 17:59:06.536173    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-945500-m02
	
	I0416 17:59:06.536238    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:08.514008    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:08.514084    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:08.514108    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:10.867331    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:10.867331    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:10.872002    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:10.872167    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:10.872167    6988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-945500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-945500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-945500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:59:11.029689    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:59:11.029689    6988 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 17:59:11.029689    6988 buildroot.go:174] setting up certificates
	I0416 17:59:11.029689    6988 provision.go:84] configureAuth start
	I0416 17:59:11.029689    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:13.049800    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:13.050575    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:13.050646    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:15.359589    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:15.359589    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:15.359846    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:17.299020    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:17.299020    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:17.300075    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:19.605590    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:19.605590    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:19.605590    6988 provision.go:143] copyHostCerts
	I0416 17:59:19.605792    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 17:59:19.606057    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 17:59:19.606057    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 17:59:19.606675    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 17:59:19.607815    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 17:59:19.608147    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 17:59:19.608226    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 17:59:19.608494    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 17:59:19.609301    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 17:59:19.609365    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 17:59:19.609365    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 17:59:19.609365    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 17:59:19.610613    6988 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-945500-m02 san=[127.0.0.1 172.19.91.6 localhost minikube multinode-945500-m02]
	I0416 17:59:19.702929    6988 provision.go:177] copyRemoteCerts
	I0416 17:59:19.710522    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:59:19.710522    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:21.626659    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:21.626659    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:21.627629    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:23.970899    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:23.970899    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:23.971221    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 17:59:24.079459    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3686883s)
	I0416 17:59:24.079459    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 17:59:24.080474    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 17:59:24.123694    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 17:59:24.124179    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0416 17:59:24.164830    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 17:59:24.165649    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 17:59:24.208692    6988 provision.go:87] duration metric: took 13.1782183s to configureAuth
	I0416 17:59:24.208692    6988 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:59:24.209067    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:59:24.209160    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:26.153425    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:26.153425    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:26.153714    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:28.507518    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:28.507518    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:28.511037    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:28.511634    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:28.511634    6988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 17:59:28.639516    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 17:59:28.639516    6988 buildroot.go:70] root file system type: tmpfs
	I0416 17:59:28.639516    6988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 17:59:28.639516    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:30.530854    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:30.531013    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:30.531013    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:32.826918    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:32.826918    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:32.832383    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:32.832984    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:32.832984    6988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.91.227"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 17:59:32.992600    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.91.227
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 17:59:32.992774    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:34.963694    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:34.963694    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:34.963799    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:37.247922    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:37.247922    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:37.252024    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:37.252024    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:37.252024    6988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 17:59:39.216273    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 17:59:39.216273    6988 machine.go:97] duration metric: took 41.5076568s to provisionDockerMachine
	I0416 17:59:39.216367    6988 client.go:171] duration metric: took 1m45.5267916s to LocalClient.Create
	I0416 17:59:39.216420    6988 start.go:167] duration metric: took 1m45.5268452s to libmachine.API.Create "multinode-945500"
	I0416 17:59:39.216420    6988 start.go:293] postStartSetup for "multinode-945500-m02" (driver="hyperv")
	I0416 17:59:39.216420    6988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:59:39.225464    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:59:39.225464    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:41.131652    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:41.131652    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:41.132015    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:43.445904    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:43.445904    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:43.446473    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 17:59:43.549649    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3239396s)
	I0416 17:59:43.558710    6988 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:59:43.563635    6988 command_runner.go:130] > NAME=Buildroot
	I0416 17:59:43.563635    6988 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 17:59:43.563635    6988 command_runner.go:130] > ID=buildroot
	I0416 17:59:43.563635    6988 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 17:59:43.563635    6988 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 17:59:43.563635    6988 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:59:43.563635    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 17:59:43.565096    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 17:59:43.566332    6988 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 17:59:43.566332    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 17:59:43.575822    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:59:43.593251    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 17:59:43.635050    6988 start.go:296] duration metric: took 4.4183786s for postStartSetup
	I0416 17:59:43.637173    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:45.591586    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:45.591586    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:45.591966    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:47.994749    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:47.994749    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:47.994889    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:59:47.996574    6988 start.go:128] duration metric: took 1m54.3070064s to createHost
	I0416 17:59:47.996664    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:49.890109    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:49.890109    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:49.890628    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:52.220872    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:52.220872    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:52.225852    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:52.226248    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:52.226248    6988 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 17:59:52.368040    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713290392.538512769
	
	I0416 17:59:52.368040    6988 fix.go:216] guest clock: 1713290392.538512769
	I0416 17:59:52.368040    6988 fix.go:229] Guest: 2024-04-16 17:59:52.538512769 +0000 UTC Remote: 2024-04-16 17:59:47.9965749 +0000 UTC m=+309.651339801 (delta=4.541937869s)
	I0416 17:59:52.368159    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:54.442418    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:54.442507    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:54.442581    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:56.760874    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:56.760874    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:56.765985    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:56.766627    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:56.766627    6988 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713290392
	I0416 17:59:56.909969    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 17:59:52 UTC 2024
	
	I0416 17:59:56.909969    6988 fix.go:236] clock set: Tue Apr 16 17:59:52 UTC 2024
	 (err=<nil>)
	I0416 17:59:56.909969    6988 start.go:83] releasing machines lock for "multinode-945500-m02", held for 2m3.2205685s
	I0416 17:59:56.909969    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:58.843464    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:58.843464    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:58.843546    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:00:01.159738    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:00:01.160789    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:01.160917    6988 out.go:177] * Found network options:
	I0416 18:00:01.161771    6988 out.go:177]   - NO_PROXY=172.19.91.227
	W0416 18:00:01.162783    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 18:00:01.163550    6988 out.go:177]   - NO_PROXY=172.19.91.227
	W0416 18:00:01.163820    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 18:00:01.165081    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 18:00:01.167381    6988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 18:00:01.167483    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:00:01.178390    6988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 18:00:01.178390    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:00:03.244075    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:00:03.244075    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:00:03.244075    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:03.244075    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:03.244356    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:00:03.244356    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:00:05.758057    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:00:05.758057    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:05.758057    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:00:05.784117    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:00:05.784117    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:05.784117    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:00:05.960484    6988 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 18:00:05.960638    6988 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7929841s)
	I0416 18:00:05.960638    6988 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0416 18:00:05.960638    6988 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.781976s)
	W0416 18:00:05.960638    6988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 18:00:05.975053    6988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 18:00:06.012668    6988 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0416 18:00:06.012756    6988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 18:00:06.012756    6988 start.go:494] detecting cgroup driver to use...
	I0416 18:00:06.012756    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:00:06.050850    6988 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0416 18:00:06.061001    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 18:00:06.091844    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 18:00:06.110783    6988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 18:00:06.118610    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 18:00:06.144577    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:00:06.171490    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 18:00:06.198550    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:00:06.226893    6988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 18:00:06.255518    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 18:00:06.285057    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 18:00:06.314136    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 18:00:06.344453    6988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 18:00:06.362440    6988 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 18:00:06.374326    6988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 18:00:06.400901    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:06.587114    6988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 18:00:06.621553    6988 start.go:494] detecting cgroup driver to use...
	I0416 18:00:06.630654    6988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 18:00:06.656160    6988 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0416 18:00:06.656235    6988 command_runner.go:130] > [Unit]
	I0416 18:00:06.656235    6988 command_runner.go:130] > Description=Docker Application Container Engine
	I0416 18:00:06.656235    6988 command_runner.go:130] > Documentation=https://docs.docker.com
	I0416 18:00:06.656235    6988 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0416 18:00:06.656235    6988 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0416 18:00:06.656235    6988 command_runner.go:130] > StartLimitBurst=3
	I0416 18:00:06.656235    6988 command_runner.go:130] > StartLimitIntervalSec=60
	I0416 18:00:06.656235    6988 command_runner.go:130] > [Service]
	I0416 18:00:06.656235    6988 command_runner.go:130] > Type=notify
	I0416 18:00:06.656235    6988 command_runner.go:130] > Restart=on-failure
	I0416 18:00:06.656235    6988 command_runner.go:130] > Environment=NO_PROXY=172.19.91.227
	I0416 18:00:06.656235    6988 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0416 18:00:06.656235    6988 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0416 18:00:06.656235    6988 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0416 18:00:06.656235    6988 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0416 18:00:06.656235    6988 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0416 18:00:06.656235    6988 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0416 18:00:06.656235    6988 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0416 18:00:06.656235    6988 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0416 18:00:06.656235    6988 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0416 18:00:06.656235    6988 command_runner.go:130] > ExecStart=
	I0416 18:00:06.656778    6988 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0416 18:00:06.656778    6988 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0416 18:00:06.656820    6988 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0416 18:00:06.656870    6988 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0416 18:00:06.656870    6988 command_runner.go:130] > LimitNOFILE=infinity
	I0416 18:00:06.656911    6988 command_runner.go:130] > LimitNPROC=infinity
	I0416 18:00:06.656911    6988 command_runner.go:130] > LimitCORE=infinity
	I0416 18:00:06.656911    6988 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0416 18:00:06.656911    6988 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0416 18:00:06.656911    6988 command_runner.go:130] > TasksMax=infinity
	I0416 18:00:06.656911    6988 command_runner.go:130] > TimeoutStartSec=0
	I0416 18:00:06.656911    6988 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0416 18:00:06.656911    6988 command_runner.go:130] > Delegate=yes
	I0416 18:00:06.656911    6988 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0416 18:00:06.656911    6988 command_runner.go:130] > KillMode=process
	I0416 18:00:06.656911    6988 command_runner.go:130] > [Install]
	I0416 18:00:06.656911    6988 command_runner.go:130] > WantedBy=multi-user.target
	I0416 18:00:06.666231    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:00:06.697894    6988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 18:00:06.737622    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:00:06.771467    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:00:06.804240    6988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 18:00:06.854175    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:00:06.875932    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:00:06.907847    6988 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0416 18:00:06.916941    6988 ssh_runner.go:195] Run: which cri-dockerd
	I0416 18:00:06.922573    6988 command_runner.go:130] > /usr/bin/cri-dockerd
	I0416 18:00:06.930663    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 18:00:06.948367    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 18:00:06.987048    6988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 18:00:07.191969    6988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 18:00:07.382844    6988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 18:00:07.382971    6988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 18:00:07.425295    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:07.611967    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 18:00:10.072387    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.460242s)
	I0416 18:00:10.082602    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 18:00:10.120067    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 18:00:10.155302    6988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 18:00:10.359234    6988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 18:00:10.554817    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:10.747932    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 18:00:10.786544    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 18:00:10.819302    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:10.999957    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 18:00:11.099015    6988 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 18:00:11.111636    6988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 18:00:11.122504    6988 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0416 18:00:11.122504    6988 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0416 18:00:11.122504    6988 command_runner.go:130] > Device: 0,22	Inode: 871         Links: 1
	I0416 18:00:11.122504    6988 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0416 18:00:11.122504    6988 command_runner.go:130] > Access: 2024-04-16 18:00:11.194886190 +0000
	I0416 18:00:11.122504    6988 command_runner.go:130] > Modify: 2024-04-16 18:00:11.194886190 +0000
	I0416 18:00:11.122504    6988 command_runner.go:130] > Change: 2024-04-16 18:00:11.200886564 +0000
	I0416 18:00:11.122504    6988 command_runner.go:130] >  Birth: -
	I0416 18:00:11.122504    6988 start.go:562] Will wait 60s for crictl version
	I0416 18:00:11.131362    6988 ssh_runner.go:195] Run: which crictl
	I0416 18:00:11.136657    6988 command_runner.go:130] > /usr/bin/crictl
	I0416 18:00:11.146046    6988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 18:00:11.199867    6988 command_runner.go:130] > Version:  0.1.0
	I0416 18:00:11.199867    6988 command_runner.go:130] > RuntimeName:  docker
	I0416 18:00:11.199867    6988 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0416 18:00:11.199867    6988 command_runner.go:130] > RuntimeApiVersion:  v1
	I0416 18:00:11.199867    6988 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 18:00:11.205859    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 18:00:11.237864    6988 command_runner.go:130] > 26.0.1
	I0416 18:00:11.245954    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 18:00:11.279233    6988 command_runner.go:130] > 26.0.1
	I0416 18:00:11.280642    6988 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 18:00:11.281457    6988 out.go:177]   - env NO_PROXY=172.19.91.227
	I0416 18:00:11.282089    6988 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 18:00:11.285919    6988 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 18:00:11.285919    6988 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 18:00:11.285919    6988 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 18:00:11.285919    6988 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 18:00:11.289016    6988 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 18:00:11.289092    6988 ip.go:210] interface addr: 172.19.80.1/20
	I0416 18:00:11.297335    6988 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 18:00:11.303557    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 18:00:11.324932    6988 mustload.go:65] Loading cluster: multinode-945500
	I0416 18:00:11.324932    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:00:11.326302    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:00:13.285643    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:00:13.285643    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:13.285643    6988 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:00:13.285961    6988 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500 for IP: 172.19.91.6
	I0416 18:00:13.285961    6988 certs.go:194] generating shared ca certs ...
	I0416 18:00:13.285961    6988 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:13.286821    6988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 18:00:13.287059    6988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 18:00:13.287230    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 18:00:13.287572    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 18:00:13.287754    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 18:00:13.287938    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 18:00:13.288586    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 18:00:13.288985    6988 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 18:00:13.289144    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 18:00:13.289487    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 18:00:13.289775    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 18:00:13.290139    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 18:00:13.290481    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 18:00:13.290481    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.291100    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.291100    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.291100    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 18:00:13.340860    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 18:00:13.392323    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 18:00:13.436417    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 18:00:13.477907    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 18:00:13.525089    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 18:00:13.566780    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 18:00:13.622111    6988 ssh_runner.go:195] Run: openssl version
	I0416 18:00:13.630969    6988 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0416 18:00:13.644134    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 18:00:13.673969    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.680217    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.680500    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.688237    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.696922    6988 command_runner.go:130] > 3ec20f2e
	I0416 18:00:13.708831    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 18:00:13.733581    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 18:00:13.760217    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.766741    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.767776    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.776508    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.784406    6988 command_runner.go:130] > b5213941
	I0416 18:00:13.793775    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 18:00:13.827353    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 18:00:13.855989    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.863594    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.863671    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.872713    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.881385    6988 command_runner.go:130] > 51391683
	I0416 18:00:13.891867    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
	I0416 18:00:13.919310    6988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 18:00:13.925213    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 18:00:13.925213    6988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 18:00:13.925406    6988 kubeadm.go:928] updating node {m02 172.19.91.6 8443 v1.29.3 docker false true} ...
	I0416 18:00:13.925406    6988 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-945500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.91.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 18:00:13.933333    6988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 18:00:13.949475    6988 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	I0416 18:00:13.949595    6988 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0416 18:00:13.961381    6988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0416 18:00:13.978194    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0416 18:00:13.978194    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0416 18:00:13.978194    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0416 18:00:13.978338    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0416 18:00:13.978338    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0416 18:00:13.989548    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:00:13.989548    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0416 18:00:13.997857    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0416 18:00:14.012312    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0416 18:00:14.012312    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0416 18:00:14.012312    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0416 18:00:14.012312    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0416 18:00:14.012312    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0416 18:00:14.012312    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0416 18:00:14.012312    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0416 18:00:14.024318    6988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0416 18:00:14.111282    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0416 18:00:14.111282    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0416 18:00:14.111282    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0416 18:00:15.159706    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0416 18:00:15.176637    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0416 18:00:15.206211    6988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 18:00:15.245325    6988 ssh_runner.go:195] Run: grep 172.19.91.227	control-plane.minikube.internal$ /etc/hosts
	I0416 18:00:15.251624    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.91.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 18:00:15.280749    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:15.453073    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 18:00:15.479748    6988 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:00:15.480950    6988 start.go:316] joinCluster: &{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 18:00:15.481069    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0416 18:00:15.481184    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:00:17.505631    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:00:17.505631    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:17.506531    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:00:19.802120    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 18:00:19.802120    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:19.802309    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:00:19.993353    6988 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token gegaat.x425l3cmfd8uouwr --discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 18:00:19.993446    6988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0": (4.5121206s)
	I0416 18:00:19.993446    6988 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0416 18:00:19.993532    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gegaat.x425l3cmfd8uouwr --discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-945500-m02"
	I0416 18:00:20.187968    6988 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 18:00:21.976702    6988 command_runner.go:130] > [preflight] Running pre-flight checks
	I0416 18:00:21.976807    6988 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0416 18:00:21.976807    6988 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0416 18:00:21.976877    6988 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 18:00:21.976877    6988 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 18:00:21.976877    6988 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0416 18:00:21.976877    6988 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0416 18:00:21.976877    6988 command_runner.go:130] > This node has joined the cluster:
	I0416 18:00:21.976877    6988 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0416 18:00:21.976946    6988 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0416 18:00:21.976946    6988 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0416 18:00:21.977006    6988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gegaat.x425l3cmfd8uouwr --discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-945500-m02": (1.9833608s)
	I0416 18:00:21.977121    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0416 18:00:22.175327    6988 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0416 18:00:22.347211    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-945500-m02 minikube.k8s.io/updated_at=2024_04_16T18_00_22_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=multinode-945500 minikube.k8s.io/primary=false
	I0416 18:00:22.461008    6988 command_runner.go:130] > node/multinode-945500-m02 labeled
	I0416 18:00:22.461089    6988 start.go:318] duration metric: took 6.9798519s to joinCluster
	I0416 18:00:22.461089    6988 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0416 18:00:22.462104    6988 out.go:177] * Verifying Kubernetes components...
	I0416 18:00:22.462104    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:00:22.473344    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:22.642951    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 18:00:22.666251    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 18:00:22.666816    6988 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.91.227:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 18:00:22.667170    6988 node_ready.go:35] waiting up to 6m0s for node "multinode-945500-m02" to be "Ready" ...
	I0416 18:00:22.667170    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:22.667170    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:22.667170    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:22.667170    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:22.680255    6988 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0416 18:00:22.680255    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Audit-Id: 79e76c8e-11df-4387-9f30-9f5f1755a5e0
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:22.680255    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:22.680255    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Content-Length: 3925
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:22 GMT
	I0416 18:00:22.680255    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"594","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2901 chars]
	I0416 18:00:23.181369    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:23.181855    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:23.181855    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:23.181855    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:23.186449    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:23.186582    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:23.186582    6988 round_trippers.go:580]     Audit-Id: 4bae6118-587b-4d9b-a922-3970c34bf8ba
	I0416 18:00:23.186582    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:23.186582    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:23.186673    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:23.186717    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:23.186756    6988 round_trippers.go:580]     Content-Length: 3925
	I0416 18:00:23.186756    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:23 GMT
	I0416 18:00:23.186949    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"594","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2901 chars]
	I0416 18:00:23.677191    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:23.677191    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:23.677317    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:23.677317    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:23.680492    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:23.680492    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Audit-Id: a7f57610-9860-47cd-ab38-3f286c67dceb
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:23.680492    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:23.680492    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Content-Length: 3925
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:23 GMT
	I0416 18:00:23.681055    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"594","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2901 chars]
	I0416 18:00:24.175480    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:24.175572    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:24.175572    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:24.175572    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:24.179352    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:24.179352    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:24.179352    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Content-Length: 3925
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:24 GMT
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Audit-Id: aacf48fe-adbc-4413-b29d-2b958ba7f686
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:24.179352    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:24.179613    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"594","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2901 chars]
	I0416 18:00:24.673856    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:24.673925    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:24.673925    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:24.673925    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:24.676592    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:24.676592    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Audit-Id: 000742e0-7f5e-446d-8a61-8bd8bd82aedc
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:24.676592    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:24.676592    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:24 GMT
	I0416 18:00:24.677350    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:24.677739    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:25.170259    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:25.170259    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:25.170259    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:25.170259    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:25.173426    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:25.173426    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:25 GMT
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Audit-Id: f9c1a393-b288-45a4-98d3-52d7af11f587
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:25.173426    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:25.173426    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:25.173964    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:25.669435    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:25.669435    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:25.669435    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:25.669530    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:25.672183    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:25.672183    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:25.672183    6988 round_trippers.go:580]     Audit-Id: 56bf1cb1-d49e-4031-8ee9-9392bbe1f6c8
	I0416 18:00:25.672183    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:25.672183    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:25.672183    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:25.672183    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:25.672183    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:25.673192    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:25 GMT
	I0416 18:00:25.673265    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:26.181911    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:26.182121    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:26.182121    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:26.182121    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:26.186490    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:26.186490    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:26.186490    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:26.186490    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:26.186490    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:26.186490    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:26.186490    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:26 GMT
	I0416 18:00:26.186580    6988 round_trippers.go:580]     Audit-Id: 88264325-f44e-4d75-8f22-6b8c5c0e9719
	I0416 18:00:26.186580    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:26.186613    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:26.679044    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:26.679044    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:26.679044    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:26.679044    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:26.683356    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:26.683356    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:26.683356    6988 round_trippers.go:580]     Audit-Id: c54e17f7-7d89-4371-9a95-03073ffa0ffb
	I0416 18:00:26.683356    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:26.683356    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:26.683356    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:26.683356    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:26.683356    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:26.683527    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:26 GMT
	I0416 18:00:26.683689    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:26.683980    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:27.180698    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:27.180698    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:27.181090    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:27.181090    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:27.184901    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:27.184901    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:27.184901    6988 round_trippers.go:580]     Audit-Id: b36ab219-082e-454d-8277-5ffcef9ec16b
	I0416 18:00:27.184901    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:27.184901    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:27.184901    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:27.184901    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:27.185540    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:27.185540    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:27 GMT
	I0416 18:00:27.185671    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:27.678872    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:27.678872    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:27.678975    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:27.678975    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:27.682351    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:27.683004    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Audit-Id: f599c3f7-7c68-4f15-8953-bfd791eb0198
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:27.683054    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:27.683054    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:27 GMT
	I0416 18:00:27.683286    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:28.183860    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:28.183860    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:28.183860    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:28.183860    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:28.186319    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:28.186319    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Audit-Id: 872de824-f646-4d43-860c-2165005c98a0
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:28.186319    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:28.186319    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:28 GMT
	I0416 18:00:28.187336    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:28.670992    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:28.670992    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:28.670992    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:28.670992    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:28.675123    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:28.675123    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Audit-Id: 098493ef-9038-4b08-bf9e-667a6c61491f
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:28.675123    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:28.675123    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:28 GMT
	I0416 18:00:28.675123    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:29.174836    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:29.174890    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:29.174945    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:29.174945    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:29.179018    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:29.179018    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:29 GMT
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Audit-Id: c31ffe7d-9164-4329-85bd-7a52ce9c45ff
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:29.179018    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:29.179018    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:29.179018    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:29.179706    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:29.677336    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:29.677336    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:29.677336    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:29.677336    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:29.681001    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:29.681227    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:29.681286    6988 round_trippers.go:580]     Audit-Id: 389d232b-c9c8-4769-869a-1c7205097848
	I0416 18:00:29.681330    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:29.681330    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:29.681367    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:29.681367    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:29.681367    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:29.681367    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:29 GMT
	I0416 18:00:29.681367    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:30.179989    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:30.179989    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:30.179989    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:30.179989    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:30.184557    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:30.184557    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Audit-Id: 2d0a23fe-1858-420a-8f7d-89a4ab9e2074
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:30.184860    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:30.184860    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:30 GMT
	I0416 18:00:30.185147    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:30.678172    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:30.678172    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:30.678172    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:30.678172    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:30.681395    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:30.681395    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:30.681395    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:30.681395    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:30.681395    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:30.681395    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:30.681395    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:30.681395    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:30 GMT
	I0416 18:00:30.682030    6988 round_trippers.go:580]     Audit-Id: d89d2b5b-078b-40e7-a8de-db37ba442614
	I0416 18:00:30.682245    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:31.177211    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:31.177533    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:31.177533    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:31.177533    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:31.252985    6988 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0416 18:00:31.252985    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:31.252985    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:31.252985    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:31.252985    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:31.252985    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:31.252985    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:31 GMT
	I0416 18:00:31.252985    6988 round_trippers.go:580]     Audit-Id: 874c3508-0079-436c-9ee6-4bfd92a9fb2a
	I0416 18:00:31.253576    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:31.253576    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:31.682017    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:31.682017    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:31.682017    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:31.682017    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:31.684916    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:31.685729    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:31.685729    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:31.685729    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:31 GMT
	I0416 18:00:31.685729    6988 round_trippers.go:580]     Audit-Id: d159045d-d37c-4252-bd61-8c73f50b03f8
	I0416 18:00:31.685830    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:31.685830    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:31.685830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:31.685830    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:32.173658    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:32.173658    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:32.173658    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:32.173658    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:32.177586    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:32.177586    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:32.177586    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:32.177586    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:32 GMT
	I0416 18:00:32.177586    6988 round_trippers.go:580]     Audit-Id: d53ca0a9-698a-4e2e-92c6-bda133162c76
	I0416 18:00:32.177586    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:32.177586    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:32.177586    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:32.178475    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:32.678024    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:32.678024    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:32.678024    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:32.678024    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:32.682085    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:32.682614    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:32.682614    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:32.682614    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:32 GMT
	I0416 18:00:32.682614    6988 round_trippers.go:580]     Audit-Id: 165d0d28-6574-4108-94db-5907ad039dd6
	I0416 18:00:32.682614    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:32.682614    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:32.682684    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:32.682989    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:33.168664    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:33.168922    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:33.168922    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:33.168922    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:33.172390    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:33.172390    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:33.172390    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:33.172390    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:33.172390    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:33.172390    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:33.172390    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:33 GMT
	I0416 18:00:33.172390    6988 round_trippers.go:580]     Audit-Id: ba696923-3f1a-4e11-8165-651eef11660a
	I0416 18:00:33.173411    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:33.676259    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:33.676259    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:33.676259    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:33.676259    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:33.680629    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:33.680629    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:33.680629    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:33.681219    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:33.681219    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:33.681219    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:33 GMT
	I0416 18:00:33.681219    6988 round_trippers.go:580]     Audit-Id: 7be99938-6273-447f-8367-634cd5f0a4de
	I0416 18:00:33.681219    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:33.681531    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:33.682462    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:34.178701    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:34.178701    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:34.178701    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:34.178701    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:34.181286    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:34.181286    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:34.181286    6988 round_trippers.go:580]     Audit-Id: f6019dfe-ab29-48d8-9d01-ee729ec66029
	I0416 18:00:34.181286    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:34.181286    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:34.181286    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:34.181286    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:34.181286    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:34 GMT
	I0416 18:00:34.181975    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:34.669380    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:34.669668    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:34.669668    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:34.669668    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:34.672465    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:34.672465    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:34.672465    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:34.672465    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:34.672465    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:34.672465    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:34 GMT
	I0416 18:00:34.672465    6988 round_trippers.go:580]     Audit-Id: a8719766-b414-4604-94c0-e20be6a01464
	I0416 18:00:34.672465    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:34.673674    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:35.169393    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:35.169618    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:35.169692    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:35.169692    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:35.174028    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:35.174028    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:35.174209    6988 round_trippers.go:580]     Audit-Id: ea553a57-8167-487c-a417-8cf0ded53743
	I0416 18:00:35.174209    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:35.174209    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:35.174209    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:35.174209    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:35.174209    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:35 GMT
	I0416 18:00:35.174511    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:35.682247    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:35.682650    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:35.682650    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:35.682650    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:35.685938    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:35.685938    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:35.685938    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:35.685938    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:35.685938    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:35.685938    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:35.685938    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:35 GMT
	I0416 18:00:35.685938    6988 round_trippers.go:580]     Audit-Id: 82dc03b1-e6f8-433d-ac2b-277fc69a2b99
	I0416 18:00:35.686923    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:35.687544    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:36.182291    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:36.182393    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:36.182393    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:36.182442    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:36.190024    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 18:00:36.190024    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:36.190024    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:36.190024    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:36.190024    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:36.190024    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:36.190024    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:36 GMT
	I0416 18:00:36.190024    6988 round_trippers.go:580]     Audit-Id: a48a8529-ba4d-49a4-90a4-d4a77c7c5001
	I0416 18:00:36.190657    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:36.677065    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:36.677162    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:36.677162    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:36.677162    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:36.680646    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:36.680646    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:36.680646    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:36.680646    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:36.680646    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:36 GMT
	I0416 18:00:36.681185    6988 round_trippers.go:580]     Audit-Id: e4e94e54-d688-4263-a0ef-d154f5f4abeb
	I0416 18:00:36.681185    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:36.681185    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:36.681442    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:37.174195    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:37.174195    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:37.174634    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:37.174634    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:37.178029    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:37.178029    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:37.178830    6988 round_trippers.go:580]     Audit-Id: 55aa8476-6f9d-4256-9569-30e89b1a496b
	I0416 18:00:37.178830    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:37.178830    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:37.178830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:37.178830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:37.178830    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:37 GMT
	I0416 18:00:37.179087    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:37.673081    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:37.673348    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:37.673425    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:37.673425    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:37.677095    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:37.677095    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:37.677095    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:37.677095    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:37.677193    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:37.677193    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:37 GMT
	I0416 18:00:37.677193    6988 round_trippers.go:580]     Audit-Id: f84a1c1a-51f5-4ca5-aedb-2f21bb70141f
	I0416 18:00:37.677193    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:37.677583    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:38.171025    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:38.171133    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:38.171133    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:38.171133    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:38.174956    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:38.174956    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:38.174956    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:38.174956    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:38.175478    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:38 GMT
	I0416 18:00:38.175478    6988 round_trippers.go:580]     Audit-Id: ad79e752-a790-4167-88de-0fa0a1ce2c7f
	I0416 18:00:38.175478    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:38.175478    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:38.175685    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:38.176345    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:38.682781    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:38.682781    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:38.682781    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:38.682875    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:38.687443    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:38.687443    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:38.687443    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:38 GMT
	I0416 18:00:38.687443    6988 round_trippers.go:580]     Audit-Id: 9f833ee4-3fc1-4823-99f9-056bf39a2137
	I0416 18:00:38.687443    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:38.687443    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:38.687443    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:38.687443    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:38.687880    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:39.181718    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:39.181718    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:39.181718    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:39.181718    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:39.185234    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:39.185234    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:39.185234    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:39 GMT
	I0416 18:00:39.185234    6988 round_trippers.go:580]     Audit-Id: c944df6e-2f72-4b2f-84ed-0ef01d4bf4ad
	I0416 18:00:39.185234    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:39.185234    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:39.185234    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:39.185234    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:39.186227    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:39.679471    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:39.679471    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:39.679471    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:39.679471    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:39.683435    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:39.683435    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:39.683435    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:39.683435    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:39.683435    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:39.683435    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:39 GMT
	I0416 18:00:39.683435    6988 round_trippers.go:580]     Audit-Id: 72ce3907-afe5-4673-a364-1b0ade9a63a2
	I0416 18:00:39.683435    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:39.684439    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:40.179709    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:40.179709    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:40.179709    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:40.179709    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:40.182280    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:40.182280    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:40.182280    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:40.182280    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:40.182280    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:40.182280    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:40 GMT
	I0416 18:00:40.182280    6988 round_trippers.go:580]     Audit-Id: 15242798-963e-4292-8f78-c57c95f730a6
	I0416 18:00:40.182280    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:40.183037    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:40.183378    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:40.679352    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:40.679436    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:40.679436    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:40.679436    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:40.682752    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:40.682752    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:40.682752    6988 round_trippers.go:580]     Audit-Id: e11e0806-566d-477a-bcb8-8829648fc79a
	I0416 18:00:40.682752    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:40.682752    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:40.682752    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:40.682752    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:40.682752    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:40 GMT
	I0416 18:00:40.683363    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:41.181519    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:41.181623    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.181623    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.181623    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.184563    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.184563    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.184563    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.184563    6988 round_trippers.go:580]     Audit-Id: 8c5f2f81-67e0-45b9-81aa-b9f9cb72a322
	I0416 18:00:41.184563    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.185366    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.185366    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.185366    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.185630    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"630","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3268 chars]
	I0416 18:00:41.186155    6988 node_ready.go:49] node "multinode-945500-m02" has status "Ready":"True"
	I0416 18:00:41.186155    6988 node_ready.go:38] duration metric: took 18.5179332s for node "multinode-945500-m02" to be "Ready" ...
	I0416 18:00:41.186235    6988 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:00:41.186380    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 18:00:41.186380    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.186380    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.186461    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.190907    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:41.191511    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.191511    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.191511    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.191511    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.191511    6988 round_trippers.go:580]     Audit-Id: 5b40846d-502b-40b4-b4e6-b0c0c199dcda
	I0416 18:00:41.191511    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.191511    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.194735    6988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"630"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70406 chars]
	I0416 18:00:41.197721    6988 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.197721    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:00:41.197721    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.197721    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.197721    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.200304    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.201307    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.201307    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.201307    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.201307    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.201307    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.201307    6988 round_trippers.go:580]     Audit-Id: ddd585b2-d4a5-4fc9-9e78-3d162e0d75cf
	I0416 18:00:41.201307    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.201671    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0416 18:00:41.202254    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.202254    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.202254    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.202254    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.204830    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.204830    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.204830    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.204830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.204830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.204830    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.204830    6988 round_trippers.go:580]     Audit-Id: 5615a17f-6d55-4784-b914-b1262342e4ef
	I0416 18:00:41.204830    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.205530    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.206190    6988 pod_ready.go:92] pod "coredns-76f75df574-86z7h" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.206190    6988 pod_ready.go:81] duration metric: took 8.4686ms for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.206190    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.206190    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:00:41.206190    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.206190    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.206190    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.208799    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.208799    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.208799    6988 round_trippers.go:580]     Audit-Id: ae8a0c71-2dd6-45b7-96d9-80a7e15fec82
	I0416 18:00:41.208799    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.208799    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.208799    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.208799    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.208799    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.209788    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"245cef70-3506-471b-9bf6-dd14a2c23d8c","resourceVersion":"372","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.91.227:2379","kubernetes.io/config.hash":"c735a32dacf9631b2b4787fe99cff316","kubernetes.io/config.mirror":"c735a32dacf9631b2b4787fe99cff316","kubernetes.io/config.seen":"2024-04-16T17:57:28.101466445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0416 18:00:41.209825    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.209825    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.209825    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.209825    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.211989    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.211989    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.211989    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.211989    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.211989    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.211989    6988 round_trippers.go:580]     Audit-Id: 0c5d029c-085b-4f7e-a116-d1258a75da93
	I0416 18:00:41.211989    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.211989    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.213223    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.213811    6988 pod_ready.go:92] pod "etcd-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.213811    6988 pod_ready.go:81] duration metric: took 7.62ms for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.213811    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.213811    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-945500
	I0416 18:00:41.213811    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.213811    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.213811    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.216448    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.216448    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.216448    6988 round_trippers.go:580]     Audit-Id: 6b2d211f-a673-4f75-931c-2de9b00a2806
	I0416 18:00:41.216448    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.216448    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.216448    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.216448    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.216448    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.217191    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-945500","namespace":"kube-system","uid":"c6ae969a-de5d-4c7e-af09-b1a5eb21f2ab","resourceVersion":"314","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.91.227:8443","kubernetes.io/config.hash":"564fae5a3e5851c815d6092b123a5395","kubernetes.io/config.mirror":"564fae5a3e5851c815d6092b123a5395","kubernetes.io/config.seen":"2024-04-16T17:57:28.101471746Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0416 18:00:41.217191    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.217778    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.217778    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.217778    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.219971    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.219971    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.219971    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.219971    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.219971    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.219971    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.219971    6988 round_trippers.go:580]     Audit-Id: 97c48e0c-3227-4fdb-bb53-2c5b0a99e16e
	I0416 18:00:41.219971    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.220674    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.220674    6988 pod_ready.go:92] pod "kube-apiserver-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.220674    6988 pod_ready.go:81] duration metric: took 6.8627ms for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.220674    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.220674    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-945500
	I0416 18:00:41.221243    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.221243    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.221243    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.223295    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.223295    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.224145    6988 round_trippers.go:580]     Audit-Id: 5ff785c8-f305-4111-b54a-6d01717ce756
	I0416 18:00:41.224182    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.224223    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.224223    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.224223    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.224315    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.224478    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-945500","namespace":"kube-system","uid":"01b937c2-9827-4240-83f0-3536fec5eb5e","resourceVersion":"345","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.mirror":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.seen":"2024-04-16T17:57:28.101473146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0416 18:00:41.225131    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.225131    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.225131    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.225131    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.231431    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 18:00:41.231431    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.231431    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.231431    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.231431    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.231431    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.231431    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.231431    6988 round_trippers.go:580]     Audit-Id: d45b4d6a-ea94-4484-87ef-fd18b35ed725
	I0416 18:00:41.231431    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.232071    6988 pod_ready.go:92] pod "kube-controller-manager-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.232071    6988 pod_ready.go:81] duration metric: took 11.3966ms for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.232071    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q5bdr" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.382236    6988 request.go:629] Waited for 150.1565ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q5bdr
	I0416 18:00:41.382407    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q5bdr
	I0416 18:00:41.382407    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.382407    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.382407    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.385083    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.385083    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.385083    6988 round_trippers.go:580]     Audit-Id: b4d8ec79-02a6-45ad-9ecc-b7b22761dffb
	I0416 18:00:41.385083    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.385083    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.385083    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.385083    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.385083    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.385507    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q5bdr","generateName":"kube-proxy-","namespace":"kube-system","uid":"18f90e3f-dd52-44a3-918a-66181a779f58","resourceVersion":"614","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5826 chars]
	I0416 18:00:41.585818    6988 request.go:629] Waited for 199.7761ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:41.585818    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:41.585818    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.586164    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.586164    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.590196    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:41.590196    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.590196    6988 round_trippers.go:580]     Audit-Id: 1d479fce-49d7-483b-a6cd-e9bad5ef24c8
	I0416 18:00:41.590196    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.590196    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.590196    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.590196    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.590196    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.590196    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"630","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3268 chars]
	I0416 18:00:41.590835    6988 pod_ready.go:92] pod "kube-proxy-q5bdr" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.590835    6988 pod_ready.go:81] duration metric: took 358.7431ms for pod "kube-proxy-q5bdr" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.590835    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.787070    6988 request.go:629] Waited for 196.0845ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 18:00:41.787761    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 18:00:41.787761    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.787761    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.787761    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.791225    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:41.791225    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.791225    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.791225    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.791225    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.791225    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.791225    6988 round_trippers.go:580]     Audit-Id: 0948013e-ea2e-4863-bd44-98088c0ba200
	I0416 18:00:41.791225    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.792789    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rfxsg","generateName":"kube-proxy-","namespace":"kube-system","uid":"b740e6e0-4768-4dd4-a958-307662a92578","resourceVersion":"401","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0416 18:00:41.990002    6988 request.go:629] Waited for 196.614ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.990240    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.990240    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.990240    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.990240    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.993828    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:41.993828    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.993828    6988 round_trippers.go:580]     Audit-Id: 604aaeac-f05a-47b3-96f5-af81155d3173
	I0416 18:00:41.993828    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.993828    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.993828    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.993828    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.993828    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:42 GMT
	I0416 18:00:41.994260    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.994754    6988 pod_ready.go:92] pod "kube-proxy-rfxsg" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.994817    6988 pod_ready.go:81] duration metric: took 403.9592ms for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.994817    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:42.191736    6988 request.go:629] Waited for 196.6039ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 18:00:42.191828    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 18:00:42.191933    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:42.191933    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:42.191933    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:42.194567    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:42.194567    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:42.194567    6988 round_trippers.go:580]     Audit-Id: 6ab76f79-405f-48f9-ad04-90e78aa34737
	I0416 18:00:42.194567    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:42.194567    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:42.194567    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:42.194567    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:42.195203    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:42 GMT
	I0416 18:00:42.195382    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-945500","namespace":"kube-system","uid":"a09e52e8-1ac2-4c22-8a3d-57969fae85a9","resourceVersion":"310","creationTimestamp":"2024-04-16T17:57:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.mirror":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.seen":"2024-04-16T17:57:20.694761708Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0416 18:00:42.393042    6988 request.go:629] Waited for 196.8309ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:42.393350    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:42.393350    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:42.393434    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:42.393434    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:42.396719    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:42.397078    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:42.397078    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:42.397078    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:42.397078    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:42 GMT
	I0416 18:00:42.397078    6988 round_trippers.go:580]     Audit-Id: ff7a49f1-7963-4872-babf-4857b06f6961
	I0416 18:00:42.397078    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:42.397078    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:42.397705    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:42.397705    6988 pod_ready.go:92] pod "kube-scheduler-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:42.397705    6988 pod_ready.go:81] duration metric: took 402.8649ms for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:42.397705    6988 pod_ready.go:38] duration metric: took 1.2114007s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:00:42.398226    6988 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 18:00:42.407057    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:00:42.430019    6988 system_svc.go:56] duration metric: took 31.7913ms WaitForService to wait for kubelet
	I0416 18:00:42.430019    6988 kubeadm.go:576] duration metric: took 19.9677952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 18:00:42.430019    6988 node_conditions.go:102] verifying NodePressure condition ...
	I0416 18:00:42.594801    6988 request.go:629] Waited for 164.4742ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes
	I0416 18:00:42.595048    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes
	I0416 18:00:42.595048    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:42.595156    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:42.595156    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:42.600192    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 18:00:42.600192    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:42.600192    6988 round_trippers.go:580]     Audit-Id: 7201947e-da4a-45b2-acc1-266f83b267ad
	I0416 18:00:42.600296    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:42.600296    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:42.600296    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:42.600296    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:42.600296    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:42 GMT
	I0416 18:00:42.600799    6988 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"633"},"items":[{"metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 9279 chars]
	I0416 18:00:42.601645    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:00:42.601726    6988 node_conditions.go:123] node cpu capacity is 2
	I0416 18:00:42.601726    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:00:42.601726    6988 node_conditions.go:123] node cpu capacity is 2
	I0416 18:00:42.601726    6988 node_conditions.go:105] duration metric: took 171.6974ms to run NodePressure ...
	I0416 18:00:42.601799    6988 start.go:240] waiting for startup goroutines ...
	I0416 18:00:42.601887    6988 start.go:254] writing updated cluster config ...
	I0416 18:00:42.611423    6988 ssh_runner.go:195] Run: rm -f paused
	I0416 18:00:42.727143    6988 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 18:00:42.728491    6988 out.go:177] * Done! kubectl is now configured to use "multinode-945500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 16 17:57:51 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:51.142641877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 17:57:51 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:51.144651052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 17:57:51 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:51.144685055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:57:51 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:51.144816666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:57:51 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:51.272898776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 17:57:51 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:51.272990084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 17:57:51 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:51.273003985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 17:57:51 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:51.274090773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:01:05 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:05.483494643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:01:05 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:05.483635748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:01:05 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:05.483656849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:01:05 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:05.485502118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:01:05 multinode-945500 cri-dockerd[1229]: time="2024-04-16T18:01:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c72a50cfb5bdeb4ceb5279eb60fe15681ce2bc5a0b4d7bd7d08ad490736a87c7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 16 18:01:06 multinode-945500 cri-dockerd[1229]: time="2024-04-16T18:01:06Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 16 18:01:06 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:06.790007462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:01:06 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:06.790158272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:01:06 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:06.790278279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:01:06 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:06.790482592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1475366123af9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago       Running             busybox                   0                   c72a50cfb5bde       busybox-7fdf7869d9-jxvx2
	6ad0b1d75a1e3       cbb01a7bd410d                                                                                         7 minutes ago       Running             coredns                   0                   2ba60ece6840a       coredns-76f75df574-86z7h
	2b470472d009f       6e38f40d628db                                                                                         7 minutes ago       Running             storage-provisioner       0                   6f233a9704eee       storage-provisioner
	cd37920f1d544       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              7 minutes ago       Running             kindnet-cni               0                   d2cd68d7f406d       kindnet-tp7jl
	f56880607ce1e       a1d263b5dc5b0                                                                                         8 minutes ago       Running             kube-proxy                0                   68766d2b671ff       kube-proxy-rfxsg
	736259e5d03b5       39f995c9f1996                                                                                         8 minutes ago       Running             kube-apiserver            0                   b8699d93388d0       kube-apiserver-multinode-945500
	4a7c8d9808b66       8c390d98f50c0                                                                                         8 minutes ago       Running             kube-scheduler            0                   ecb0ceb1a3fed       kube-scheduler-multinode-945500
	91288754cb0bd       6052a25da3f97                                                                                         8 minutes ago       Running             kube-controller-manager   0                   d28c611e06055       kube-controller-manager-multinode-945500
	0cae708a3787a       3861cfcd7c04c                                                                                         8 minutes ago       Running             etcd                      0                   5f7e5b16341d1       etcd-multinode-945500
	
	
	==> coredns [6ad0b1d75a1e] <==
	[INFO] 10.244.0.3:47642 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140809s
	[INFO] 10.244.1.2:38063 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000393824s
	[INFO] 10.244.1.2:53430 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000153309s
	[INFO] 10.244.1.2:47690 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181411s
	[INFO] 10.244.1.2:40309 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145609s
	[INFO] 10.244.1.2:60258 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000052603s
	[INFO] 10.244.1.2:43597 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000068204s
	[INFO] 10.244.1.2:53767 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061503s
	[INFO] 10.244.1.2:54777 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056603s
	[INFO] 10.244.0.3:38964 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184311s
	[INFO] 10.244.0.3:53114 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074805s
	[INFO] 10.244.0.3:36074 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062204s
	[INFO] 10.244.0.3:60668 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090906s
	[INFO] 10.244.1.2:54659 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099206s
	[INFO] 10.244.1.2:41929 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080505s
	[INFO] 10.244.1.2:40931 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059704s
	[INFO] 10.244.1.2:48577 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058804s
	[INFO] 10.244.0.3:33415 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000283317s
	[INFO] 10.244.0.3:52256 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109407s
	[INFO] 10.244.0.3:34542 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000222014s
	[INFO] 10.244.0.3:59509 - 5 "PTR IN 1.80.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000278017s
	[INFO] 10.244.1.2:34647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164509s
	[INFO] 10.244.1.2:44123 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155309s
	[INFO] 10.244.1.2:47985 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000056403s
	[INFO] 10.244.1.2:38781 - 5 "PTR IN 1.80.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000051303s
	
	
	==> describe nodes <==
	Name:               multinode-945500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-945500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=multinode-945500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T17_57_28_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:57:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-945500
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 18:05:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 18:01:33 +0000   Tue, 16 Apr 2024 17:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 18:01:33 +0000   Tue, 16 Apr 2024 17:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 18:01:33 +0000   Tue, 16 Apr 2024 17:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 18:01:33 +0000   Tue, 16 Apr 2024 17:57:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.91.227
	  Hostname:    multinode-945500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 e85d34dd6c5848b4a3ec498b43e70cda
	  System UUID:                f07a2411-3a9a-ca4a-afc3-5ddc78eea33d
	  Boot ID:                    271a6251-2183-4573-9d3f-923b343cbbd7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-jxvx2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 coredns-76f75df574-86z7h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m3s
	  kube-system                 etcd-multinode-945500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m15s
	  kube-system                 kindnet-tp7jl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m3s
	  kube-system                 kube-apiserver-multinode-945500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-controller-manager-multinode-945500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-proxy-rfxsg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-scheduler-multinode-945500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m1s                   kube-proxy       
	  Normal  Starting                 8m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m23s (x8 over 8m23s)  kubelet          Node multinode-945500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m23s (x8 over 8m23s)  kubelet          Node multinode-945500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m23s (x7 over 8m23s)  kubelet          Node multinode-945500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m15s                  kubelet          Node multinode-945500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m15s                  kubelet          Node multinode-945500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m15s                  kubelet          Node multinode-945500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m4s                   node-controller  Node multinode-945500 event: Registered Node multinode-945500 in Controller
	  Normal  NodeReady                7m53s                  kubelet          Node multinode-945500 status is now: NodeReady
	
	
	Name:               multinode-945500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-945500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=multinode-945500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T18_00_22_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 18:00:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-945500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 18:05:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 18:01:22 +0000   Tue, 16 Apr 2024 18:00:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 18:01:22 +0000   Tue, 16 Apr 2024 18:00:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 18:01:22 +0000   Tue, 16 Apr 2024 18:00:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 18:01:22 +0000   Tue, 16 Apr 2024 18:00:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.91.6
	  Hostname:    multinode-945500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ffb3ffe1886460d8f31c8166436085f
	  System UUID:                cd85b681-7c9d-6842-b820-50fe53a2fe10
	  Boot ID:                    391147f8-cd3e-46f1-9b23-dd3a04f0f3a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ns8nx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kindnet-7pg6g               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m22s
	  kube-system                 kube-proxy-q5bdr            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m22s (x2 over 5m22s)  kubelet          Node multinode-945500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x2 over 5m22s)  kubelet          Node multinode-945500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s (x2 over 5m22s)  kubelet          Node multinode-945500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m19s                  node-controller  Node multinode-945500-m02 event: Registered Node multinode-945500-m02 in Controller
	  Normal  NodeReady                5m2s                   kubelet          Node multinode-945500-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr16 17:56] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.180108] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[ +28.712788] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.080808] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.453937] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.161653] systemd-fstab-generator[996]: Ignoring "noauto" option for root device
	[  +0.200737] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[  +2.669121] systemd-fstab-generator[1182]: Ignoring "noauto" option for root device
	[  +0.171244] systemd-fstab-generator[1194]: Ignoring "noauto" option for root device
	[  +0.164230] systemd-fstab-generator[1206]: Ignoring "noauto" option for root device
	[  +0.237653] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[Apr16 17:57] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.100359] kauditd_printk_skb: 205 callbacks suppressed
	[  +2.927133] systemd-fstab-generator[1513]: Ignoring "noauto" option for root device
	[  +5.699753] systemd-fstab-generator[1707]: Ignoring "noauto" option for root device
	[  +0.085837] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.760431] systemd-fstab-generator[2107]: Ignoring "noauto" option for root device
	[  +0.135160] kauditd_printk_skb: 62 callbacks suppressed
	[  +7.450297] hrtimer: interrupt took 987259 ns
	[  +5.262610] systemd-fstab-generator[2292]: Ignoring "noauto" option for root device
	[  +0.195654] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.560394] kauditd_printk_skb: 51 callbacks suppressed
	[Apr16 18:01] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [0cae708a3787] <==
	{"level":"info","ts":"2024-04-16T17:57:22.024751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 switched to configuration voters=(16790251013889734582)"}
	{"level":"info","ts":"2024-04-16T17:57:22.037022Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ba3fb579e58fbd76","local-member-id":"e902f456ac8a37b6","added-peer-id":"e902f456ac8a37b6","added-peer-peer-urls":["https://172.19.91.227:2380"]}
	{"level":"info","ts":"2024-04-16T17:57:22.036585Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T17:57:22.037467Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e902f456ac8a37b6","initial-advertise-peer-urls":["https://172.19.91.227:2380"],"listen-peer-urls":["https://172.19.91.227:2380"],"advertise-client-urls":["https://172.19.91.227:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.91.227:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T17:57:22.037573Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T17:57:22.036608Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.91.227:2380"}
	{"level":"info","ts":"2024-04-16T17:57:22.037796Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.91.227:2380"}
	{"level":"info","ts":"2024-04-16T17:57:22.485441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-16T17:57:22.485773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-16T17:57:22.486062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 received MsgPreVoteResp from e902f456ac8a37b6 at term 1"}
	{"level":"info","ts":"2024-04-16T17:57:22.486206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 became candidate at term 2"}
	{"level":"info","ts":"2024-04-16T17:57:22.486317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 received MsgVoteResp from e902f456ac8a37b6 at term 2"}
	{"level":"info","ts":"2024-04-16T17:57:22.486459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 became leader at term 2"}
	{"level":"info","ts":"2024-04-16T17:57:22.486613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e902f456ac8a37b6 elected leader e902f456ac8a37b6 at term 2"}
	{"level":"info","ts":"2024-04-16T17:57:22.492605Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e902f456ac8a37b6","local-member-attributes":"{Name:multinode-945500 ClientURLs:[https://172.19.91.227:2379]}","request-path":"/0/members/e902f456ac8a37b6/attributes","cluster-id":"ba3fb579e58fbd76","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T17:57:22.493027Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:57:22.493291Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T17:57:22.495438Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T17:57:22.493174Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:57:22.501637Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T17:57:22.494099Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:57:22.508993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.91.227:2379"}
	{"level":"info","ts":"2024-04-16T17:57:22.537458Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ba3fb579e58fbd76","local-member-id":"e902f456ac8a37b6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:57:22.537767Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:57:22.540447Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 18:05:43 up 10 min,  0 users,  load average: 0.23, 0.26, 0.17
	Linux multinode-945500 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [cd37920f1d54] <==
	I0416 18:04:38.586110       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:04:48.594017       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:04:48.594100       1 main.go:227] handling current node
	I0416 18:04:48.594110       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:04:48.594117       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:04:58.607099       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:04:58.607194       1 main.go:227] handling current node
	I0416 18:04:58.607206       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:04:58.607214       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:05:08.620705       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:05:08.620743       1 main.go:227] handling current node
	I0416 18:05:08.620754       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:05:08.620761       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:05:18.633193       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:05:18.633286       1 main.go:227] handling current node
	I0416 18:05:18.633297       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:05:18.633303       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:05:28.644033       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:05:28.644134       1 main.go:227] handling current node
	I0416 18:05:28.644146       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:05:28.644154       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:05:38.659735       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:05:38.659770       1 main.go:227] handling current node
	I0416 18:05:38.659782       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:05:38.659788       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [736259e5d03b] <==
	I0416 17:57:24.492548       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 17:57:24.493015       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 17:57:24.493164       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 17:57:24.493567       1 aggregator.go:165] initial CRD sync complete...
	I0416 17:57:24.493754       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 17:57:24.493855       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 17:57:24.493948       1 cache.go:39] Caches are synced for autoregister controller
	I0416 17:57:24.498835       1 controller.go:624] quota admission added evaluator for: namespaces
	I0416 17:57:24.572544       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0416 17:57:24.581941       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 17:57:25.383934       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0416 17:57:25.391363       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0416 17:57:25.391584       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 17:57:26.186472       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 17:57:26.241100       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 17:57:26.380286       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0416 17:57:26.389156       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.91.227]
	I0416 17:57:26.390446       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 17:57:26.395894       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 17:57:26.463024       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 17:57:27.978875       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 17:57:27.996061       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0416 17:57:28.010130       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 17:57:40.322187       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0416 17:57:40.406944       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [91288754cb0b] <==
	I0416 17:57:41.176487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="38.505µs"
	I0416 17:57:50.419156       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="87.708µs"
	I0416 17:57:50.439046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="77.007µs"
	I0416 17:57:52.289724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="340.797µs"
	I0416 17:57:52.327958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="8.879815ms"
	I0416 17:57:52.329283       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="77.899µs"
	I0416 17:57:54.522679       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0416 18:00:21.143291       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-945500-m02\" does not exist"
	I0416 18:00:21.160886       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7pg6g"
	I0416 18:00:21.165863       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q5bdr"
	I0416 18:00:21.190337       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-945500-m02" podCIDRs=["10.244.1.0/24"]
	I0416 18:00:24.552622       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-945500-m02"
	I0416 18:00:24.552697       1 event.go:376] "Event occurred" object="multinode-945500-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-945500-m02 event: Registered Node multinode-945500-m02 in Controller"
	I0416 18:00:41.273225       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-945500-m02"
	I0416 18:01:05.000162       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 2"
	I0416 18:01:05.018037       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-ns8nx"
	I0416 18:01:05.041877       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-jxvx2"
	I0416 18:01:05.061957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="58.524499ms"
	I0416 18:01:05.079880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="17.398354ms"
	I0416 18:01:05.080339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="67.502µs"
	I0416 18:01:05.093042       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="55.802µs"
	I0416 18:01:07.013162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="7.557663ms"
	I0416 18:01:07.014558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="1.14747ms"
	I0416 18:01:07.433900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="7.930386ms"
	I0416 18:01:07.434257       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="37.403µs"
	
	
	==> kube-proxy [f56880607ce1] <==
	I0416 17:57:41.776688       1 server_others.go:72] "Using iptables proxy"
	I0416 17:57:41.792626       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.91.227"]
	I0416 17:57:41.867257       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 17:57:41.867331       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:57:41.867350       1 server_others.go:168] "Using iptables Proxier"
	I0416 17:57:41.871330       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:57:41.872230       1 server.go:865] "Version info" version="v1.29.3"
	I0416 17:57:41.872370       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:57:41.874113       1 config.go:188] "Starting service config controller"
	I0416 17:57:41.874135       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 17:57:41.874160       1 config.go:97] "Starting endpoint slice config controller"
	I0416 17:57:41.874165       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 17:57:41.876871       1 config.go:315] "Starting node config controller"
	I0416 17:57:41.876896       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 17:57:41.974693       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 17:57:41.974749       1 shared_informer.go:318] Caches are synced for service config
	I0416 17:57:41.977426       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4a7c8d9808b6] <==
	W0416 17:57:25.449324       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.449598       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.655533       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.656479       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.692827       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 17:57:25.693097       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 17:57:25.711042       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 17:57:25.711136       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 17:57:25.720155       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 17:57:25.720353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 17:57:25.721550       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.721738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.738855       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 17:57:25.738995       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 17:57:25.765058       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 17:57:25.765096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0416 17:57:25.774340       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.774569       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.791990       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 17:57:25.792031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 17:57:25.929298       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 17:57:25.929342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 17:57:26.119349       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 17:57:26.119818       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0416 17:57:29.235915       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 18:01:28 multinode-945500 kubelet[2114]: E0416 18:01:28.260561    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:01:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:01:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:01:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:01:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:02:28 multinode-945500 kubelet[2114]: E0416 18:02:28.261580    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:02:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:02:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:02:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:02:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:03:28 multinode-945500 kubelet[2114]: E0416 18:03:28.265624    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:03:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:03:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:03:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:03:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:04:28 multinode-945500 kubelet[2114]: E0416 18:04:28.262267    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:04:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:04:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:04:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:04:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:05:28 multinode-945500 kubelet[2114]: E0416 18:05:28.265449    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:05:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:05:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:05:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:05:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 18:05:36.657421    2948 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-945500 -n multinode-945500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-945500 -n multinode-945500: (10.9140979s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-945500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/AddNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/AddNode (231.40s)

                                                
                                    
TestMultiNode/serial/CopyFile (62.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 status --output json --alsologtostderr
E0416 18:06:07.038731    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-945500 status --output json --alsologtostderr: exit status 2 (32.3160692s)

                                                
                                                
-- stdout --
	[{"Name":"multinode-945500","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"multinode-945500-m02","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true},{"Name":"multinode-945500-m03","Host":"Running","Kubelet":"Stopped","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 18:06:05.097920    9744 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 18:06:05.160346    9744 out.go:291] Setting OutFile to fd 724 ...
	I0416 18:06:05.161271    9744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:06:05.161271    9744 out.go:304] Setting ErrFile to fd 940...
	I0416 18:06:05.161271    9744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:06:05.174265    9744 out.go:298] Setting JSON to true
	I0416 18:06:05.174265    9744 mustload.go:65] Loading cluster: multinode-945500
	I0416 18:06:05.174265    9744 notify.go:220] Checking for updates...
	I0416 18:06:05.175340    9744 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:06:05.175340    9744 status.go:255] checking status of multinode-945500 ...
	I0416 18:06:05.176075    9744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:06:07.151554    9744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:06:07.151554    9744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:06:07.151633    9744 status.go:330] multinode-945500 host status = "Running" (err=<nil>)
	I0416 18:06:07.151689    9744 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:06:07.152461    9744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:06:09.119163    9744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:06:09.119163    9744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:06:09.119926    9744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:06:11.474929    9744 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 18:06:11.474929    9744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:06:11.475007    9744 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:06:11.483306    9744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 18:06:11.483306    9744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:06:13.436964    9744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:06:13.436964    9744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:06:13.436964    9744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:06:15.737518    9744 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 18:06:15.737739    9744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:06:15.738124    9744 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:06:15.829650    9744 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3460982s)
	I0416 18:06:15.839561    9744 ssh_runner.go:195] Run: systemctl --version
	I0416 18:06:15.857012    9744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:06:15.885646    9744 kubeconfig.go:125] found "multinode-945500" server: "https://172.19.91.227:8443"
	I0416 18:06:15.885646    9744 api_server.go:166] Checking apiserver status ...
	I0416 18:06:15.898631    9744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 18:06:15.931163    9744 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2058/cgroup
	W0416 18:06:15.948490    9744 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2058/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 18:06:15.959212    9744 ssh_runner.go:195] Run: ls
	I0416 18:06:15.965575    9744 api_server.go:253] Checking apiserver healthz at https://172.19.91.227:8443/healthz ...
	I0416 18:06:15.972330    9744 api_server.go:279] https://172.19.91.227:8443/healthz returned 200:
	ok
	I0416 18:06:15.972330    9744 status.go:422] multinode-945500 apiserver status = Running (err=<nil>)
	I0416 18:06:15.972330    9744 status.go:257] multinode-945500 status: &{Name:multinode-945500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 18:06:15.972782    9744 status.go:255] checking status of multinode-945500-m02 ...
	I0416 18:06:15.973008    9744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:06:17.924824    9744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:06:17.924824    9744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:06:17.924824    9744 status.go:330] multinode-945500-m02 host status = "Running" (err=<nil>)
	I0416 18:06:17.924824    9744 host.go:66] Checking if "multinode-945500-m02" exists ...
	I0416 18:06:17.925775    9744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:06:19.947558    9744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:06:19.947558    9744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:06:19.947558    9744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:06:22.317853    9744 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:06:22.317853    9744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:06:22.317940    9744 host.go:66] Checking if "multinode-945500-m02" exists ...
	I0416 18:06:22.329864    9744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 18:06:22.329864    9744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:06:24.270603    9744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:06:24.271396    9744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:06:24.271473    9744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:06:26.568422    9744 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:06:26.568422    9744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:06:26.568574    9744 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:06:26.656413    9744 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3263034s)
	I0416 18:06:26.665058    9744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:06:26.687554    9744 status.go:257] multinode-945500-m02 status: &{Name:multinode-945500-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0416 18:06:26.687554    9744 status.go:255] checking status of multinode-945500-m03 ...
	I0416 18:06:26.687814    9744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:06:28.632268    9744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:06:28.632268    9744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:06:28.632348    9744 status.go:330] multinode-945500-m03 host status = "Running" (err=<nil>)
	I0416 18:06:28.632348    9744 host.go:66] Checking if "multinode-945500-m03" exists ...
	I0416 18:06:28.632410    9744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:06:30.556678    9744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:06:30.556678    9744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:06:30.556678    9744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:06:32.896199    9744 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:06:32.896199    9744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:06:32.896311    9744 host.go:66] Checking if "multinode-945500-m03" exists ...
	I0416 18:06:32.905590    9744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 18:06:32.905590    9744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:06:34.852760    9744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:06:34.853782    9744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:06:34.853899    9744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:06:37.127755    9744 main.go:141] libmachine: [stdout =====>] : 172.19.83.156
	
	I0416 18:06:37.127755    9744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:06:37.128416    9744 sshutil.go:53] new ssh client: &{IP:172.19.83.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\id_rsa Username:docker}
	I0416 18:06:37.231886    9744 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3260505s)
	I0416 18:06:37.242542    9744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:06:37.270289    9744 status.go:257] multinode-945500-m03 status: &{Name:multinode-945500-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:186: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-945500 status --output json --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-945500 -n multinode-945500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-945500 -n multinode-945500: (10.8393025s)
helpers_test.go:244: <<< TestMultiNode/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-945500 logs -n 25: (7.4680215s)
helpers_test.go:252: TestMultiNode/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| mount   | C:\Users\jenkins.minikube5:/minikube-host         | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:49 UTC |                     |
	|         | --profile mount-start-2-738600 --v 0              |                      |                   |                |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |                |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |                |                     |                     |
	|         |                                                 0 |                      |                   |                |                     |                     |
	| ssh     | mount-start-2-738600 ssh -- ls                    | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:49 UTC | 16 Apr 24 17:49 UTC |
	|         | /minikube-host                                    |                      |                   |                |                     |                     |
	| delete  | -p mount-start-1-738600                           | mount-start-1-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:49 UTC | 16 Apr 24 17:50 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |                |                     |                     |
	| ssh     | mount-start-2-738600 ssh -- ls                    | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:50 UTC | 16 Apr 24 17:50 UTC |
	|         | /minikube-host                                    |                      |                   |                |                     |                     |
	| stop    | -p mount-start-2-738600                           | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:50 UTC | 16 Apr 24 17:50 UTC |
	| start   | -p mount-start-2-738600                           | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:50 UTC |                     |
	| delete  | -p mount-start-2-738600                           | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:53 UTC | 16 Apr 24 17:54 UTC |
	| delete  | -p mount-start-1-738600                           | mount-start-1-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:54 UTC | 16 Apr 24 17:54 UTC |
	| start   | -p multinode-945500                               | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:54 UTC | 16 Apr 24 18:00 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |                |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |                |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |                |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- apply -f                   | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- rollout                    | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | status deployment/busybox                         |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- get pods -o                | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- get pods -o                | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2 --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2 --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2 -- nslookup              |                      |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx -- nslookup              |                      |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- get pods -o                | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2                          |                      |                   |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | busybox-7fdf7869d9-jxvx2 -- sh                    |                      |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.80.1                          |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx                          |                      |                   |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | busybox-7fdf7869d9-ns8nx -- sh                    |                      |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.80.1                          |                      |                   |                |                     |                     |
	| node    | add -p multinode-945500 -v 3                      | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:02 UTC |                     |
	|         | --alsologtostderr                                 |                      |                   |                |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:54:38
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 17:54:38.458993    6988 out.go:291] Setting OutFile to fd 960 ...
	I0416 17:54:38.459581    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:54:38.459581    6988 out.go:304] Setting ErrFile to fd 676...
	I0416 17:54:38.459678    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:54:38.483191    6988 out.go:298] Setting JSON to false
	I0416 17:54:38.487192    6988 start.go:129] hostinfo: {"hostname":"minikube5","uptime":27708,"bootTime":1713262370,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 17:54:38.487192    6988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 17:54:38.488186    6988 out.go:177] * [multinode-945500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 17:54:38.489188    6988 notify.go:220] Checking for updates...
	I0416 17:54:38.489188    6988 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:54:38.490185    6988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:54:38.490185    6988 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 17:54:38.491184    6988 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:54:38.491184    6988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:54:38.493214    6988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:54:43.355603    6988 out.go:177] * Using the hyperv driver based on user configuration
	I0416 17:54:43.356197    6988 start.go:297] selected driver: hyperv
	I0416 17:54:43.356197    6988 start.go:901] validating driver "hyperv" against <nil>
	I0416 17:54:43.356273    6988 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:54:43.396166    6988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 17:54:43.397176    6988 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:54:43.397504    6988 cni.go:84] Creating CNI manager for ""
	I0416 17:54:43.397537    6988 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0416 17:54:43.397537    6988 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0416 17:54:43.397711    6988 start.go:340] cluster config:
	{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:54:43.397711    6988 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:54:43.399183    6988 out.go:177] * Starting "multinode-945500" primary control-plane node in "multinode-945500" cluster
	I0416 17:54:43.399538    6988 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 17:54:43.399538    6988 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 17:54:43.399538    6988 cache.go:56] Caching tarball of preloaded images
	I0416 17:54:43.399538    6988 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 17:54:43.400205    6988 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 17:54:43.400795    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:54:43.401059    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json: {Name:mk67f15eab35e69a3277eb33417238e6d320045f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:54:43.401506    6988 start.go:360] acquireMachinesLock for multinode-945500: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:54:43.402049    6988 start.go:364] duration metric: took 542.9µs to acquireMachinesLock for "multinode-945500"
	I0416 17:54:43.402113    6988 start.go:93] Provisioning new machine with config: &{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 17:54:43.402113    6988 start.go:125] createHost starting for "" (driver="hyperv")
	I0416 17:54:43.403221    6988 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 17:54:43.403542    6988 start.go:159] libmachine.API.Create for "multinode-945500" (driver="hyperv")
	I0416 17:54:43.403595    6988 client.go:168] LocalClient.Create starting
	I0416 17:54:43.404086    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Decoding PEM data...
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Parsing certificate...
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Decoding PEM data...
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Parsing certificate...
	I0416 17:54:43.404276    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 17:54:45.288246    6988 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 17:54:45.288342    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:45.288493    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 17:54:46.922912    6988 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 17:54:46.922912    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:46.923010    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 17:54:48.270889    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 17:54:48.270889    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:48.270889    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 17:54:51.466825    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 17:54:51.466825    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:51.468671    6988 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 17:54:51.806641    6988 main.go:141] libmachine: Creating SSH key...
	I0416 17:54:52.035351    6988 main.go:141] libmachine: Creating VM...
	I0416 17:54:52.036345    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 17:54:54.656446    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 17:54:54.656494    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:54.656633    6988 main.go:141] libmachine: Using switch "Default Switch"
	I0416 17:54:54.656633    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 17:54:56.229378    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 17:54:56.229607    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:56.229607    6988 main.go:141] libmachine: Creating VHD
	I0416 17:54:56.229607    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 17:54:59.733727    6988 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5A486C23-0EFD-43D1-8BEB-4A60ACE1DF98
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 17:54:59.733800    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:59.733873    6988 main.go:141] libmachine: Writing magic tar header
	I0416 17:54:59.733915    6988 main.go:141] libmachine: Writing SSH key tar header
	I0416 17:54:59.741031    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 17:55:02.758991    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:02.758991    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:02.759271    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\disk.vhd' -SizeBytes 20000MB
	I0416 17:55:05.056217    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:05.056217    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:05.057316    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-945500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 17:55:08.311574    6988 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-945500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 17:55:08.311574    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:08.311863    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-945500 -DynamicMemoryEnabled $false
	I0416 17:55:10.388584    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:10.389586    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:10.389586    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-945500 -Count 2
	I0416 17:55:12.413711    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:12.413711    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:12.414332    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-945500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\boot2docker.iso'
	I0416 17:55:14.741711    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:14.741711    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:14.741711    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-945500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\disk.vhd'
	I0416 17:55:17.003645    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:17.003645    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:17.003645    6988 main.go:141] libmachine: Starting VM...
	I0416 17:55:17.003645    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-945500
	I0416 17:55:19.573472    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:19.573700    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:19.573700    6988 main.go:141] libmachine: Waiting for host to start...
	I0416 17:55:19.573790    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:21.624051    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:21.624051    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:21.624771    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:23.884692    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:23.884692    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:24.892318    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:26.899190    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:26.899190    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:26.899348    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:29.176655    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:29.176655    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:30.177215    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:32.143102    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:32.143102    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:32.143464    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:34.404986    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:34.405261    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:35.419315    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:37.438553    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:37.438958    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:37.438958    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:39.692795    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:39.692795    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:40.700997    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:42.744138    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:42.744982    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:42.745064    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:45.083348    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:55:45.083348    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:45.083448    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:47.049900    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:47.050444    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:47.050523    6988 machine.go:94] provisionDockerMachine start ...
	I0416 17:55:47.050566    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:49.000414    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:49.000414    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:49.000537    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:51.284377    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:55:51.285296    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:51.290721    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:55:51.303784    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:55:51.303784    6988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:55:51.430251    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 17:55:51.430320    6988 buildroot.go:166] provisioning hostname "multinode-945500"
	I0416 17:55:51.430320    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:53.414239    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:53.414239    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:53.414512    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:55.729573    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:55:55.729573    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:55.733714    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:55:55.734245    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:55:55.734245    6988 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-945500 && echo "multinode-945500" | sudo tee /etc/hostname
	I0416 17:55:55.888906    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-945500
	
	I0416 17:55:55.888975    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:57.782302    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:57.782302    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:57.782786    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:00.073834    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:00.073834    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:00.078560    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:00.078657    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:00.078657    6988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-945500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-945500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-945500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:56:00.230030    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:56:00.230079    6988 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 17:56:00.230079    6988 buildroot.go:174] setting up certificates
	I0416 17:56:00.230079    6988 provision.go:84] configureAuth start
	I0416 17:56:00.230182    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:02.147449    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:02.147449    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:02.147449    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:04.449327    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:04.450388    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:04.450388    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:06.443860    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:06.443860    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:06.444760    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:08.814817    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:08.814817    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:08.814817    6988 provision.go:143] copyHostCerts
	I0416 17:56:08.815787    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 17:56:08.816004    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 17:56:08.816004    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 17:56:08.816371    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 17:56:08.817376    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 17:56:08.817582    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 17:56:08.817582    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 17:56:08.817582    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 17:56:08.818480    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 17:56:08.818480    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 17:56:08.818480    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 17:56:08.819278    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 17:56:08.820184    6988 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-945500 san=[127.0.0.1 172.19.91.227 localhost minikube multinode-945500]
	I0416 17:56:09.120922    6988 provision.go:177] copyRemoteCerts
	I0416 17:56:09.129891    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:56:09.129891    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:11.105788    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:11.105788    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:11.105788    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:13.452243    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:13.452243    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:13.452604    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:56:13.553822    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.42368s)
	I0416 17:56:13.553822    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 17:56:13.553822    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 17:56:13.595187    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 17:56:13.595187    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0416 17:56:13.635052    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 17:56:13.635528    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 17:56:13.675952    6988 provision.go:87] duration metric: took 13.4440865s to configureAuth
	I0416 17:56:13.676049    6988 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:56:13.676421    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:56:13.676504    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:15.610838    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:15.610926    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:15.610926    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:17.912484    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:17.913491    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:17.916946    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:17.917531    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:17.917531    6988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 17:56:18.061063    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 17:56:18.061063    6988 buildroot.go:70] root file system type: tmpfs
	I0416 17:56:18.061690    6988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 17:56:18.061690    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:20.049603    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:20.049603    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:20.049978    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:22.383521    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:22.383521    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:22.387896    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:22.388601    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:22.388601    6988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 17:56:22.561164    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 17:56:22.561269    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:24.443674    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:24.444091    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:24.444193    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:26.758959    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:26.758959    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:26.765429    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:26.765429    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:26.765957    6988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 17:56:28.704221    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 17:56:28.704221    6988 machine.go:97] duration metric: took 41.6513356s to provisionDockerMachine
	I0416 17:56:28.704317    6988 client.go:171] duration metric: took 1m45.2947032s to LocalClient.Create
	I0416 17:56:28.704398    6988 start.go:167] duration metric: took 1m45.2948041s to libmachine.API.Create "multinode-945500"
	I0416 17:56:28.704398    6988 start.go:293] postStartSetup for "multinode-945500" (driver="hyperv")
	I0416 17:56:28.704489    6988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:56:28.714148    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:56:28.714148    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:30.638973    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:30.638973    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:30.639089    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:32.961564    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:32.961564    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:32.961564    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:56:33.069322    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3549265s)
	I0416 17:56:33.078710    6988 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:56:33.085331    6988 command_runner.go:130] > NAME=Buildroot
	I0416 17:56:33.085331    6988 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 17:56:33.085331    6988 command_runner.go:130] > ID=buildroot
	I0416 17:56:33.085331    6988 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 17:56:33.085331    6988 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 17:56:33.086070    6988 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:56:33.086171    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 17:56:33.086945    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 17:56:33.088129    6988 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 17:56:33.088129    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 17:56:33.106615    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:56:33.129263    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 17:56:33.174677    6988 start.go:296] duration metric: took 4.469934s for postStartSetup
	I0416 17:56:33.177364    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:35.133709    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:35.133709    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:35.133796    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:37.452577    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:37.452577    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:37.453529    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:56:37.455914    6988 start.go:128] duration metric: took 1m54.0472303s to createHost
	I0416 17:56:37.455914    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:39.425449    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:39.425449    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:39.426011    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:41.744115    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:41.744115    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:41.748497    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:41.748631    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:41.748631    6988 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 17:56:41.875115    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713290202.039643702
	
	I0416 17:56:41.875272    6988 fix.go:216] guest clock: 1713290202.039643702
	I0416 17:56:41.875272    6988 fix.go:229] Guest: 2024-04-16 17:56:42.039643702 +0000 UTC Remote: 2024-04-16 17:56:37.4559145 +0000 UTC m=+119.121500601 (delta=4.583729202s)
	I0416 17:56:41.875399    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:43.872191    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:43.873117    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:43.873117    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:46.207797    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:46.207797    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:46.213575    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:46.213575    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:46.213575    6988 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713290201
	I0416 17:56:46.370971    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 17:56:41 UTC 2024
	
	I0416 17:56:46.370971    6988 fix.go:236] clock set: Tue Apr 16 17:56:41 UTC 2024
	 (err=<nil>)
	I0416 17:56:46.371058    6988 start.go:83] releasing machines lock for "multinode-945500", held for 2m2.9620339s
	I0416 17:56:46.371284    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:48.308157    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:48.308984    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:48.309041    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:50.575031    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:50.575031    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:50.579218    6988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:56:50.579218    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:50.586441    6988 ssh_runner.go:195] Run: cat /version.json
	I0416 17:56:50.586979    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:52.634472    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:52.634472    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:52.634472    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:52.639621    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:52.639621    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:52.639621    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:55.047917    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:55.048488    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:55.048917    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:56:55.065759    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:55.066462    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:55.066602    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:56:55.354145    6988 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 17:56:55.354145    6988 command_runner.go:130] > {"iso_version": "v1.33.0-1713236417-18649", "kicbase_version": "v0.0.43-1713215244-18647", "minikube_version": "v1.33.0-beta.0", "commit": "4ec1a3e88a9f3ffb3930e555284d907468ae83a6"}
	I0416 17:56:55.354145    6988 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7746557s)
	I0416 17:56:55.354145    6988 ssh_runner.go:235] Completed: cat /version.json: (4.7668953s)
	I0416 17:56:55.366453    6988 ssh_runner.go:195] Run: systemctl --version
	I0416 17:56:55.375220    6988 command_runner.go:130] > systemd 252 (252)
	I0416 17:56:55.375220    6988 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0416 17:56:55.384285    6988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 17:56:55.392020    6988 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0416 17:56:55.392567    6988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:56:55.401209    6988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:56:55.426637    6988 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0416 17:56:55.427403    6988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 17:56:55.427503    6988 start.go:494] detecting cgroup driver to use...
	I0416 17:56:55.427534    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:56:55.457110    6988 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0416 17:56:55.470104    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 17:56:55.494070    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 17:56:55.511268    6988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 17:56:55.523954    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 17:56:55.549161    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 17:56:55.576216    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 17:56:55.602400    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 17:56:55.630572    6988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:56:55.656816    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 17:56:55.683825    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 17:56:55.710767    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
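The run of `sed -i` commands above rewrites `/etc/containerd/config.toml` in place: it pins the pause image, forces `SystemdCgroup = false` (the cgroupfs driver the log announces), and migrates runtime names to `io.containerd.runc.v2`. A sketch of the two central edits against a sample config in a temp file (the sample content is a minimal stand-in, not the full config.toml):

```shell
# Sketch of the containerd config edits from the log, applied to a
# stand-in config.toml rather than /etc/containerd/config.toml.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF

# Pin the pause image and force the cgroupfs cgroup driver, as the log does.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

grep 'SystemdCgroup' "$cfg"
```

Note the captured leading whitespace (`\1`) in each substitution, which keeps the TOML indentation intact.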
	I0416 17:56:55.737864    6988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:56:55.753678    6988 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 17:56:55.761926    6988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:56:55.794919    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:56:55.964839    6988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 17:56:55.993258    6988 start.go:494] detecting cgroup driver to use...
	I0416 17:56:56.002807    6988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 17:56:56.020460    6988 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0416 17:56:56.020914    6988 command_runner.go:130] > [Unit]
	I0416 17:56:56.020998    6988 command_runner.go:130] > Description=Docker Application Container Engine
	I0416 17:56:56.020998    6988 command_runner.go:130] > Documentation=https://docs.docker.com
	I0416 17:56:56.020998    6988 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0416 17:56:56.020998    6988 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0416 17:56:56.021071    6988 command_runner.go:130] > StartLimitBurst=3
	I0416 17:56:56.021071    6988 command_runner.go:130] > StartLimitIntervalSec=60
	I0416 17:56:56.021071    6988 command_runner.go:130] > [Service]
	I0416 17:56:56.021071    6988 command_runner.go:130] > Type=notify
	I0416 17:56:56.021071    6988 command_runner.go:130] > Restart=on-failure
	I0416 17:56:56.021071    6988 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0416 17:56:56.021156    6988 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0416 17:56:56.021156    6988 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0416 17:56:56.021156    6988 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0416 17:56:56.021241    6988 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0416 17:56:56.021281    6988 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0416 17:56:56.021354    6988 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0416 17:56:56.021427    6988 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0416 17:56:56.021427    6988 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0416 17:56:56.021427    6988 command_runner.go:130] > ExecStart=
	I0416 17:56:56.021508    6988 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0416 17:56:56.021508    6988 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0416 17:56:56.021586    6988 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0416 17:56:56.021586    6988 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0416 17:56:56.021663    6988 command_runner.go:130] > LimitNOFILE=infinity
	I0416 17:56:56.021663    6988 command_runner.go:130] > LimitNPROC=infinity
	I0416 17:56:56.021663    6988 command_runner.go:130] > LimitCORE=infinity
	I0416 17:56:56.021663    6988 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0416 17:56:56.021663    6988 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0416 17:56:56.021738    6988 command_runner.go:130] > TasksMax=infinity
	I0416 17:56:56.021738    6988 command_runner.go:130] > TimeoutStartSec=0
	I0416 17:56:56.021738    6988 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0416 17:56:56.021738    6988 command_runner.go:130] > Delegate=yes
	I0416 17:56:56.021738    6988 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0416 17:56:56.021811    6988 command_runner.go:130] > KillMode=process
	I0416 17:56:56.021811    6988 command_runner.go:130] > [Install]
	I0416 17:56:56.021811    6988 command_runner.go:130] > WantedBy=multi-user.target
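The unit file dumped above relies on the systemd drop-in convention its own comments describe: an empty `ExecStart=` line first clears the command inherited from the base unit, then the next `ExecStart=` sets the override. A sketch of that pattern written to a temp directory (the real drop-in lives under `/etc/systemd/system/docker.service.d/`; the dockerd flags here are abbreviated):

```shell
# Sketch of the ExecStart-reset drop-in pattern from the unit file above.
# Assumption: temp dir stands in for /etc/systemd/system/docker.service.d/.
dropin="$(mktemp -d)/override.conf"
cat > "$dropin" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF

grep -c '^ExecStart' "$dropin"   # prints 2: the reset line plus the override
```

Without the empty first line, systemd would treat the two `ExecStart=` values as a sequence and refuse to start the service, exactly as the comment in the log warns.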
	I0416 17:56:56.032694    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:56:56.060059    6988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:56:56.101716    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:56:56.131287    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 17:56:56.163190    6988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 17:56:56.210983    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 17:56:56.231971    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:56:56.261397    6988 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0416 17:56:56.272666    6988 ssh_runner.go:195] Run: which cri-dockerd
	I0416 17:56:56.276995    6988 command_runner.go:130] > /usr/bin/cri-dockerd
	I0416 17:56:56.286591    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 17:56:56.299870    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 17:56:56.337571    6988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 17:56:56.500406    6988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 17:56:56.646617    6988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 17:56:56.646617    6988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 17:56:56.690996    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:56:56.871261    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:56:59.295937    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4242935s)
	I0416 17:56:59.304599    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 17:56:59.333610    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 17:56:59.361657    6988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 17:56:59.541548    6988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 17:56:59.705672    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:56:59.866404    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 17:56:59.907640    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 17:56:59.939748    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:00.107406    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 17:57:00.200852    6988 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 17:57:00.212214    6988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 17:57:00.220777    6988 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0416 17:57:00.220777    6988 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0416 17:57:00.220777    6988 command_runner.go:130] > Device: 0,22	Inode: 885         Links: 1
	I0416 17:57:00.220777    6988 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0416 17:57:00.220777    6988 command_runner.go:130] > Access: 2024-04-16 17:57:00.296362377 +0000
	I0416 17:57:00.220777    6988 command_runner.go:130] > Modify: 2024-04-16 17:57:00.296362377 +0000
	I0416 17:57:00.220777    6988 command_runner.go:130] > Change: 2024-04-16 17:57:00.300362562 +0000
	I0416 17:57:00.220777    6988 command_runner.go:130] >  Birth: -
	I0416 17:57:00.220777    6988 start.go:562] Will wait 60s for crictl version
	I0416 17:57:00.230775    6988 ssh_runner.go:195] Run: which crictl
	I0416 17:57:00.235786    6988 command_runner.go:130] > /usr/bin/crictl
	I0416 17:57:00.245023    6988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:57:00.292622    6988 command_runner.go:130] > Version:  0.1.0
	I0416 17:57:00.292622    6988 command_runner.go:130] > RuntimeName:  docker
	I0416 17:57:00.292622    6988 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0416 17:57:00.292739    6988 command_runner.go:130] > RuntimeApiVersion:  v1
	I0416 17:57:00.292794    6988 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 17:57:00.301388    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 17:57:00.331067    6988 command_runner.go:130] > 26.0.1
	I0416 17:57:00.337439    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 17:57:00.365025    6988 command_runner.go:130] > 26.0.1
	I0416 17:57:00.367212    6988 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 17:57:00.367413    6988 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 17:57:00.371515    6988 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 17:57:00.371597    6988 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 17:57:00.371597    6988 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 17:57:00.371597    6988 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 17:57:00.374158    6988 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 17:57:00.374158    6988 ip.go:210] interface addr: 172.19.80.1/20
	I0416 17:57:00.380883    6988 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 17:57:00.386921    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
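The `/etc/hosts` update above uses a rewrite-and-copy idiom: filter out any existing entry for the name, append the fresh mapping, and copy the result back over the original. A sketch against a temp file (the log anchors the filter on a literal tab before the hostname; this stand-in matches on the hostname suffix instead):

```shell
# Sketch of the /etc/hosts update idiom from the log, on a temp file.
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n10.0.0.1\thost.minikube.internal\n' > "$hosts"

# Drop the stale entry, append the new one, then replace the file.
{ grep -v 'host.minikube.internal$' "$hosts"; \
  printf '172.19.80.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"

grep 'host.minikube.internal' "$hosts"
```

The log repeats the same idiom later for `control-plane.minikube.internal`.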
	I0416 17:57:00.407839    6988 kubeadm.go:877] updating cluster {Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 17:57:00.407839    6988 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 17:57:00.416191    6988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 17:57:00.437198    6988 docker.go:685] Got preloaded images: 
	I0416 17:57:00.437198    6988 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0416 17:57:00.446472    6988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 17:57:00.461564    6988 command_runner.go:139] > {"Repositories":{}}
	I0416 17:57:00.472373    6988 ssh_runner.go:195] Run: which lz4
	I0416 17:57:00.477412    6988 command_runner.go:130] > /usr/bin/lz4
	I0416 17:57:00.477412    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0416 17:57:00.487276    6988 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 17:57:00.492861    6988 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 17:57:00.493543    6988 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 17:57:00.493600    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
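The check-then-copy sequence above probes the remote path with `stat` and treats a non-zero exit ("No such file or directory") as the signal to scp the preload tarball over. A sketch of that branch on a local temp path (the real target is `/preloaded.tar.lz4` on the guest VM):

```shell
# Sketch of the existence check from the log: stat the target and branch
# on the exit status. Assumption: a local non-existent temp path stands
# in for /preloaded.tar.lz4 on the VM.
target="$(mktemp -u)"   # generates a path without creating the file
if stat -c "%s %y" "$target" 2>/dev/null; then
  echo "already present, skipping transfer"
else
  echo "missing, would scp the tarball now"
fi
```

Because the file is absent, `stat` exits 1 and the transfer branch runs, mirroring the "Process exited with status 1" line in the log.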
	I0416 17:57:01.970587    6988 docker.go:649] duration metric: took 1.4924844s to copy over tarball
	I0416 17:57:01.979028    6988 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 17:57:10.810575    6988 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.831045s)
	I0416 17:57:10.810689    6988 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 17:57:10.875450    6988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 17:57:10.895935    6988 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.29.3":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.29.3":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.29.3":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b
5bbe4f71784e392"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.29.3":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0416 17:57:10.895935    6988 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0416 17:57:10.938742    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:11.136149    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:57:13.733531    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5972349s)
	I0416 17:57:13.742898    6988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0416 17:57:13.765918    6988 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:57:13.765918    6988 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0416 17:57:13.765918    6988 cache_images.go:84] Images are preloaded, skipping loading
	I0416 17:57:13.765918    6988 kubeadm.go:928] updating node { 172.19.91.227 8443 v1.29.3 docker true true} ...
	I0416 17:57:13.766906    6988 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-945500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.91.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 17:57:13.774901    6988 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 17:57:13.804585    6988 command_runner.go:130] > cgroupfs
	I0416 17:57:13.804682    6988 cni.go:84] Creating CNI manager for ""
	I0416 17:57:13.804682    6988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 17:57:13.804682    6988 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 17:57:13.804682    6988 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.91.227 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-945500 NodeName:multinode-945500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.91.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.91.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 17:57:13.804682    6988 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.91.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-945500"
	  kubeletExtraArgs:
	    node-ip: 172.19.91.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.91.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
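The generated kubeadm config above is a single file holding four YAML documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick structural sanity check is to count the `kind:` lines; the heredoc below is a minimal stand-in, not the full rendered config:

```shell
# Sketch: verify the four-document shape of a kubeadm config like the
# one above. The content here is an abbreviated stand-in.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF

grep -c '^kind:' "$cfg"   # prints 4
```

The log then ships the real config to `/var/tmp/minikube/kubeadm.yaml.new` before `kubeadm` consumes it.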
	I0416 17:57:13.813761    6988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 17:57:13.830081    6988 command_runner.go:130] > kubeadm
	I0416 17:57:13.830081    6988 command_runner.go:130] > kubectl
	I0416 17:57:13.830081    6988 command_runner.go:130] > kubelet
	I0416 17:57:13.830165    6988 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 17:57:13.838770    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 17:57:13.852826    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0416 17:57:13.878799    6988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 17:57:13.905862    6988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0416 17:57:13.943017    6988 ssh_runner.go:195] Run: grep 172.19.91.227	control-plane.minikube.internal$ /etc/hosts
	I0416 17:57:13.949214    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.91.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 17:57:13.980273    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:14.153644    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:57:14.177658    6988 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500 for IP: 172.19.91.227
	I0416 17:57:14.178687    6988 certs.go:194] generating shared ca certs ...
	I0416 17:57:14.178687    6988 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.179455    6988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 17:57:14.179902    6988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 17:57:14.180190    6988 certs.go:256] generating profile certs ...
	I0416 17:57:14.180755    6988 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.key
	I0416 17:57:14.180755    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.crt with IP's: []
	I0416 17:57:14.411174    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.crt ...
	I0416 17:57:14.411174    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.crt: {Name:mkc0623b015c4c96d85b8b3b13eb2cc6d3ac8763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.412171    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.key ...
	I0416 17:57:14.412171    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.key: {Name:mkbd9c01c6892e02b0a8d9c434e98a742e87c2cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.413058    6988 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.e3ea85af
	I0416 17:57:14.414154    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.e3ea85af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.91.227]
	I0416 17:57:14.575473    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.e3ea85af ...
	I0416 17:57:14.575473    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.e3ea85af: {Name:mk62c37573433811afa986b89a237b6c7bb0d1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.576358    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.e3ea85af ...
	I0416 17:57:14.576358    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.e3ea85af: {Name:mk6c23ff826064c327d5a977affe1877b10d9b72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.577574    6988 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.e3ea85af -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt
	I0416 17:57:14.590486    6988 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.e3ea85af -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key
	I0416 17:57:14.590795    6988 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key
	I0416 17:57:14.590795    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt with IP's: []
	I0416 17:57:14.794779    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt ...
	I0416 17:57:14.795779    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt: {Name:mk40c9063a89a73b56bd4ccd89e15d6559ba1e37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.796782    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key ...
	I0416 17:57:14.796782    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key: {Name:mk5e95084b6a4adeb7806da3f2d851d8919dced5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.798528    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 17:57:14.798760    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 17:57:14.799041    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 17:57:14.799237    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 17:57:14.799423    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 17:57:14.799630    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 17:57:14.799827    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 17:57:14.806003    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 17:57:14.809977    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 17:57:14.809977    6988 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 17:57:14.809977    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 17:57:14.809977    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 17:57:14.811027    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 17:57:14.811027    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 17:57:14.811027    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 17:57:14.811551    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 17:57:14.811650    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:14.811737    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 17:57:14.812935    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 17:57:14.852949    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 17:57:14.891959    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 17:57:14.931152    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 17:57:14.968412    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 17:57:15.008983    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 17:57:15.048515    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 17:57:15.089091    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 17:57:15.125356    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 17:57:15.162621    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 17:57:15.205246    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 17:57:15.248985    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 17:57:15.289002    6988 ssh_runner.go:195] Run: openssl version
	I0416 17:57:15.296351    6988 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0416 17:57:15.308333    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 17:57:15.335334    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:15.341349    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:15.342189    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:15.351026    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:15.358591    6988 command_runner.go:130] > b5213941
	I0416 17:57:15.367034    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 17:57:15.391467    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 17:57:15.416387    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 17:57:15.423831    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 17:57:15.423957    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 17:57:15.434442    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 17:57:15.442459    6988 command_runner.go:130] > 51391683
	I0416 17:57:15.451530    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
	I0416 17:57:15.480393    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 17:57:15.509124    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 17:57:15.515721    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 17:57:15.515827    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 17:57:15.524021    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 17:57:15.533694    6988 command_runner.go:130] > 3ec20f2e
	I0416 17:57:15.541647    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
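[Annotation] The three hash/symlink sequences above all apply the same OpenSSL trust-store convention: compute the certificate's subject hash with `openssl x509 -hash -noout`, then link the cert into the trust directory as `<hash>.0`, which is the filename OpenSSL uses to locate a CA at verification time. A self-contained sketch in a temp dir instead of /etc/ssl/certs (the self-signed "minikubeCA" here is a stand-in, not the cluster's real CA):

```shell
set -e
dir="$(mktemp -d)"
cd "$dir"
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out minikubeCA.pem \
  -subj "/CN=minikubeCA" -days 1 2>/dev/null
# Same hash the log computes (b5213941 for the real minikubeCA.pem):
hash="$(openssl x509 -hash -noout -in minikubeCA.pem)"
# OpenSSL resolves trusted CAs by subject-hash filename: <hash>.0
ln -fs "$dir/minikubeCA.pem" "$dir/$hash.0"
openssl x509 -noout -subject -in "$dir/$hash.0"
```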
	I0416 17:57:15.567570    6988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:57:15.573415    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 17:57:15.573840    6988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 17:57:15.574281    6988 kubeadm.go:391] StartCluster: {Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:57:15.580506    6988 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 17:57:15.612292    6988 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 17:57:15.627466    6988 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0416 17:57:15.628097    6988 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0416 17:57:15.628097    6988 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0416 17:57:15.635032    6988 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 17:57:15.660479    6988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 17:57:15.676695    6988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0416 17:57:15.676792    6988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0416 17:57:15.676792    6988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0416 17:57:15.676855    6988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:57:15.676918    6988 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:57:15.676973    6988 kubeadm.go:156] found existing configuration files:
	
	I0416 17:57:15.684985    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 17:57:15.700012    6988 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:57:15.700126    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:57:15.708938    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 17:57:15.734829    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 17:57:15.747861    6988 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:57:15.748201    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:57:15.756696    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 17:57:15.784559    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 17:57:15.804131    6988 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:57:15.804131    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:57:15.815130    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 17:57:15.838118    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 17:57:15.854130    6988 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:57:15.854130    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:57:15.862912    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
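[Annotation] Each of the four kubeadm.go:162 checks above follows the same stale-config pattern: grep the kubeconfig for the expected API-server URL and delete the file when the marker is absent, so `kubeadm init` regenerates it. A minimal stand-in using a temp file instead of /etc/kubernetes (and no sudo):

```shell
set -e
conf="$(mktemp)"
printf 'server: https://old-endpoint:6443\n' > "$conf"
# Same check-then-remove logic as the log: keep the file only if it already
# points at control-plane.minikube.internal:8443.
if ! grep -q 'https://control-plane.minikube.internal:8443' "$conf"; then
  rm -f "$conf"
fi
test ! -e "$conf" && echo "stale config removed"   # prints "stale config removed"
```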
	I0416 17:57:15.876128    6988 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 17:57:16.053541    6988 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 17:57:16.053541    6988 command_runner.go:130] > [init] Using Kubernetes version: v1.29.3
	I0416 17:57:16.053865    6988 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 17:57:16.053865    6988 command_runner.go:130] > [preflight] Running pre-flight checks
	I0416 17:57:16.200461    6988 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:57:16.200461    6988 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:57:16.200461    6988 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:57:16.200461    6988 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:57:16.200461    6988 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:57:16.200461    6988 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:57:16.451494    6988 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:57:16.452473    6988 out.go:204]   - Generating certificates and keys ...
	I0416 17:57:16.451494    6988 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:57:16.453479    6988 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 17:57:16.453479    6988 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0416 17:57:16.453479    6988 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0416 17:57:16.453479    6988 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 17:57:16.705308    6988 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 17:57:16.705409    6988 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 17:57:16.859312    6988 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 17:57:16.859312    6988 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0416 17:57:17.049120    6988 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 17:57:17.049237    6988 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0416 17:57:17.314616    6988 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 17:57:17.314728    6988 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0416 17:57:17.509835    6988 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 17:57:17.509835    6988 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0416 17:57:17.510247    6988 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-945500] and IPs [172.19.91.227 127.0.0.1 ::1]
	I0416 17:57:17.510247    6988 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-945500] and IPs [172.19.91.227 127.0.0.1 ::1]
	I0416 17:57:17.791919    6988 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 17:57:17.791919    6988 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0416 17:57:17.792356    6988 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-945500] and IPs [172.19.91.227 127.0.0.1 ::1]
	I0416 17:57:17.792356    6988 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-945500] and IPs [172.19.91.227 127.0.0.1 ::1]
	I0416 17:57:17.995022    6988 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 17:57:17.995106    6988 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 17:57:18.220639    6988 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 17:57:18.220729    6988 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 17:57:18.582174    6988 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0416 17:57:18.582274    6988 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 17:57:18.582480    6988 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:57:18.582554    6988 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:57:18.743963    6988 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:57:18.744564    6988 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:57:19.067769    6988 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 17:57:19.068120    6988 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 17:57:19.240331    6988 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:57:19.240672    6988 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:57:19.461195    6988 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:57:19.461195    6988 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:57:19.652943    6988 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:57:19.653442    6988 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:57:19.654516    6988 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:57:19.654516    6988 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:57:19.660559    6988 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:57:19.660559    6988 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:57:19.661534    6988 out.go:204]   - Booting up control plane ...
	I0416 17:57:19.661534    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:57:19.661534    6988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:57:19.662544    6988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:57:19.662544    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:57:19.663540    6988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:57:19.663540    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:57:19.684534    6988 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:57:19.685153    6988 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:57:19.687532    6988 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:57:19.687532    6988 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:57:19.687532    6988 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 17:57:19.687532    6988 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0416 17:57:19.860703    6988 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:57:19.860788    6988 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:57:26.366044    6988 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.507200 seconds
	I0416 17:57:26.366044    6988 command_runner.go:130] > [apiclient] All control plane components are healthy after 6.507200 seconds
	I0416 17:57:26.385213    6988 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 17:57:26.385213    6988 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 17:57:26.408456    6988 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 17:57:26.408456    6988 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 17:57:26.942416    6988 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0416 17:57:26.942416    6988 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 17:57:26.943198    6988 kubeadm.go:309] [mark-control-plane] Marking the node multinode-945500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 17:57:26.943369    6988 command_runner.go:130] > [mark-control-plane] Marking the node multinode-945500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 17:57:27.456093    6988 kubeadm.go:309] [bootstrap-token] Using token: v7bkxo.pzxgmh7iiytdovwq
	I0416 17:57:27.456235    6988 command_runner.go:130] > [bootstrap-token] Using token: v7bkxo.pzxgmh7iiytdovwq
	I0416 17:57:27.456953    6988 out.go:204]   - Configuring RBAC rules ...
	I0416 17:57:27.457407    6988 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 17:57:27.457407    6988 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 17:57:27.473244    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 17:57:27.473244    6988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 17:57:27.485961    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 17:57:27.486019    6988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 17:57:27.492510    6988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 17:57:27.492510    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 17:57:27.496129    6988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 17:57:27.496129    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 17:57:27.501092    6988 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 17:57:27.501753    6988 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 17:57:27.517045    6988 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 17:57:27.517045    6988 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 17:57:27.829288    6988 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 17:57:27.829833    6988 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0416 17:57:27.880030    6988 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 17:57:27.880030    6988 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0416 17:57:27.883021    6988 kubeadm.go:309] 
	I0416 17:57:27.883395    6988 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0416 17:57:27.883467    6988 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 17:57:27.883558    6988 kubeadm.go:309] 
	I0416 17:57:27.883809    6988 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 17:57:27.883809    6988 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0416 17:57:27.883877    6988 kubeadm.go:309] 
	I0416 17:57:27.883877    6988 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 17:57:27.883877    6988 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0416 17:57:27.883877    6988 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 17:57:27.883877    6988 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 17:57:27.883877    6988 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 17:57:27.883877    6988 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 17:57:27.883877    6988 kubeadm.go:309] 
	I0416 17:57:27.883877    6988 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0416 17:57:27.883877    6988 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 17:57:27.883877    6988 kubeadm.go:309] 
	I0416 17:57:27.884765    6988 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 17:57:27.884765    6988 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 17:57:27.884765    6988 kubeadm.go:309] 
	I0416 17:57:27.884765    6988 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 17:57:27.884765    6988 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0416 17:57:27.884765    6988 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 17:57:27.884765    6988 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 17:57:27.884765    6988 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 17:57:27.884765    6988 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 17:57:27.884765    6988 kubeadm.go:309] 
	I0416 17:57:27.884765    6988 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0416 17:57:27.884765    6988 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 17:57:27.885775    6988 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0416 17:57:27.885775    6988 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 17:57:27.885775    6988 kubeadm.go:309] 
	I0416 17:57:27.885775    6988 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token v7bkxo.pzxgmh7iiytdovwq \
	I0416 17:57:27.885775    6988 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token v7bkxo.pzxgmh7iiytdovwq \
	I0416 17:57:27.885775    6988 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 17:57:27.885775    6988 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 17:57:27.885775    6988 kubeadm.go:309] 	--control-plane 
	I0416 17:57:27.885775    6988 command_runner.go:130] > 	--control-plane 
	I0416 17:57:27.885775    6988 kubeadm.go:309] 
	I0416 17:57:27.886749    6988 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0416 17:57:27.886749    6988 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 17:57:27.886749    6988 kubeadm.go:309] 
	I0416 17:57:27.886749    6988 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token v7bkxo.pzxgmh7iiytdovwq \
	I0416 17:57:27.886749    6988 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token v7bkxo.pzxgmh7iiytdovwq \
	I0416 17:57:27.886749    6988 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 17:57:27.886749    6988 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 17:57:27.886749    6988 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 17:57:27.887747    6988 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 17:57:27.887747    6988 cni.go:84] Creating CNI manager for ""
	I0416 17:57:27.887747    6988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 17:57:27.888782    6988 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 17:57:27.898776    6988 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 17:57:27.906367    6988 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0416 17:57:27.906367    6988 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0416 17:57:27.906446    6988 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0416 17:57:27.906446    6988 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0416 17:57:27.906446    6988 command_runner.go:130] > Access: 2024-04-16 17:55:43.845708000 +0000
	I0416 17:57:27.906446    6988 command_runner.go:130] > Modify: 2024-04-16 08:43:32.000000000 +0000
	I0416 17:57:27.906446    6988 command_runner.go:130] > Change: 2024-04-16 17:55:34.250000000 +0000
	I0416 17:57:27.906446    6988 command_runner.go:130] >  Birth: -
	I0416 17:57:27.906446    6988 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 17:57:27.906446    6988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 17:57:27.988519    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 17:57:28.490851    6988 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0416 17:57:28.498847    6988 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0416 17:57:28.511858    6988 command_runner.go:130] > serviceaccount/kindnet created
	I0416 17:57:28.523843    6988 command_runner.go:130] > daemonset.apps/kindnet created
	I0416 17:57:28.526917    6988 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 17:57:28.536843    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:28.538723    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-945500 minikube.k8s.io/updated_at=2024_04_16T17_57_28_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=multinode-945500 minikube.k8s.io/primary=true
	I0416 17:57:28.553542    6988 command_runner.go:130] > -16
	I0416 17:57:28.553542    6988 ops.go:34] apiserver oom_adj: -16
	I0416 17:57:28.663066    6988 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0416 17:57:28.672472    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:28.703696    6988 command_runner.go:130] > node/multinode-945500 labeled
	I0416 17:57:28.779726    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:29.176642    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:29.310699    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:29.688820    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:29.783095    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:30.180137    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:30.283623    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:30.677902    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:30.770542    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:31.173788    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:31.267177    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:31.681339    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:31.776737    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:32.179098    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:32.275419    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:32.685593    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:32.784034    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:33.184934    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:33.284755    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:33.689894    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:33.786322    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:34.177543    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:34.278089    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:34.688074    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:34.788843    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:35.176613    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:35.278146    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:35.690652    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:35.790109    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:36.185543    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:36.283203    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:36.685087    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:36.787681    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:37.183826    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:37.287103    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:37.686779    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:37.790505    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:38.186663    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:38.313330    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:38.690145    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:38.792194    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:39.188096    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:39.307296    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:39.673049    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:39.777746    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:40.175109    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:40.317376    6988 command_runner.go:130] > NAME      SECRETS   AGE
	I0416 17:57:40.317525    6988 command_runner.go:130] > default   0         0s
	I0416 17:57:40.317525    6988 kubeadm.go:1107] duration metric: took 11.7899387s to wait for elevateKubeSystemPrivileges
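	The run of `kubectl get sa default` failures above is a deliberate poll loop: minikube retries roughly every 500 ms until the kube-controller-manager has created the namespace's `default` ServiceAccount, then records the elapsed time. A minimal sketch of that retry pattern (hypothetical helper, not minikube's actual code):

```python
import time

def wait_for(check, timeout=60.0, interval=0.5):
    """Poll `check` until it returns truthy or `timeout` seconds elapse.

    Returns True on success, False if the deadline passes first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Example: a probe that only succeeds on its third attempt.
attempts = {"n": 0}
def ready():
    attempts["n"] += 1
    return attempts["n"] >= 3

assert wait_for(ready, timeout=5.0, interval=0.01)
```

In the log above the probe is the `kubectl get sa default` invocation; each "NotFound" line is one failed iteration of a loop like this.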
	W0416 17:57:40.317725    6988 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 17:57:40.317725    6988 kubeadm.go:393] duration metric: took 24.7420862s to StartCluster
	I0416 17:57:40.317841    6988 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:40.318068    6988 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:57:40.320080    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:40.321302    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 17:57:40.321470    6988 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 17:57:40.321470    6988 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 17:57:40.321614    6988 addons.go:69] Setting storage-provisioner=true in profile "multinode-945500"
	I0416 17:57:40.321614    6988 addons.go:234] Setting addon storage-provisioner=true in "multinode-945500"
	I0416 17:57:40.321614    6988 host.go:66] Checking if "multinode-945500" exists ...
	I0416 17:57:40.321614    6988 addons.go:69] Setting default-storageclass=true in profile "multinode-945500"
	I0416 17:57:40.321614    6988 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-945500"
	I0416 17:57:40.321614    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:57:40.322690    6988 out.go:177] * Verifying Kubernetes components...
	I0416 17:57:40.322606    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:40.322690    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:40.336146    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:40.543940    6988 command_runner.go:130] > apiVersion: v1
	I0416 17:57:40.544012    6988 command_runner.go:130] > data:
	I0416 17:57:40.544012    6988 command_runner.go:130] >   Corefile: |
	I0416 17:57:40.544012    6988 command_runner.go:130] >     .:53 {
	I0416 17:57:40.544012    6988 command_runner.go:130] >         errors
	I0416 17:57:40.544012    6988 command_runner.go:130] >         health {
	I0416 17:57:40.544088    6988 command_runner.go:130] >            lameduck 5s
	I0416 17:57:40.544088    6988 command_runner.go:130] >         }
	I0416 17:57:40.544088    6988 command_runner.go:130] >         ready
	I0416 17:57:40.544112    6988 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0416 17:57:40.544112    6988 command_runner.go:130] >            pods insecure
	I0416 17:57:40.544112    6988 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0416 17:57:40.544112    6988 command_runner.go:130] >            ttl 30
	I0416 17:57:40.544112    6988 command_runner.go:130] >         }
	I0416 17:57:40.544112    6988 command_runner.go:130] >         prometheus :9153
	I0416 17:57:40.544112    6988 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0416 17:57:40.544191    6988 command_runner.go:130] >            max_concurrent 1000
	I0416 17:57:40.544191    6988 command_runner.go:130] >         }
	I0416 17:57:40.544191    6988 command_runner.go:130] >         cache 30
	I0416 17:57:40.544191    6988 command_runner.go:130] >         loop
	I0416 17:57:40.544191    6988 command_runner.go:130] >         reload
	I0416 17:57:40.544191    6988 command_runner.go:130] >         loadbalance
	I0416 17:57:40.544191    6988 command_runner.go:130] >     }
	I0416 17:57:40.544191    6988 command_runner.go:130] > kind: ConfigMap
	I0416 17:57:40.544191    6988 command_runner.go:130] > metadata:
	I0416 17:57:40.544191    6988 command_runner.go:130] >   creationTimestamp: "2024-04-16T17:57:27Z"
	I0416 17:57:40.544191    6988 command_runner.go:130] >   name: coredns
	I0416 17:57:40.544191    6988 command_runner.go:130] >   namespace: kube-system
	I0416 17:57:40.544296    6988 command_runner.go:130] >   resourceVersion: "274"
	I0416 17:57:40.544296    6988 command_runner.go:130] >   uid: 8b9b71a6-9315-41d9-b055-6f10c4c901fd
	I0416 17:57:40.544483    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 17:57:40.652097    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:57:40.902041    6988 command_runner.go:130] > configmap/coredns replaced
	I0416 17:57:40.905269    6988 start.go:946] {"host.minikube.internal": 172.19.80.1} host record injected into CoreDNS's ConfigMap
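	The `sed` pipeline two steps up performs a textual edit on the CoreDNS Corefile: it inserts a `hosts` block (mapping `host.minikube.internal` to the host's IP, 172.19.80.1 here) immediately before the `forward . /etc/resolv.conf` line, then replaces the ConfigMap. A rough Python equivalent of that injection (illustrative only; minikube does it with `sed` as logged):

```python
# A minimal Corefile resembling the ConfigMap dumped above.
COREFILE = """.:53 {
    errors
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
}
"""

def inject_host_record(corefile: str, ip: str) -> str:
    """Insert a CoreDNS `hosts` block before the resolv.conf forward."""
    out = []
    for line in corefile.splitlines():
        if line.strip().startswith("forward . /etc/resolv.conf"):
            indent = line[: len(line) - len(line.lstrip())]
            out += [
                f"{indent}hosts {{",
                f"{indent}   {ip} host.minikube.internal",
                f"{indent}   fallthrough",
                f"{indent}}}",
            ]
        out.append(line)
    return "\n".join(out) + "\n"

patched = inject_host_record(COREFILE, "172.19.80.1")
```

The `fallthrough` directive keeps the `hosts` plugin from swallowing queries it cannot answer, so everything else still reaches the `forward` block.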
	I0416 17:57:40.906408    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:57:40.906594    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:57:40.907054    6988 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.91.227:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 17:57:40.907195    6988 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.91.227:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 17:57:40.908042    6988 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 17:57:40.908659    6988 node_ready.go:35] waiting up to 6m0s for node "multinode-945500" to be "Ready" ...
	I0416 17:57:40.908860    6988 round_trippers.go:463] GET https://172.19.91.227:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0416 17:57:40.908860    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:40.908860    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:40.908860    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:40.908860    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:40.908860    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:40.908955    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:40.908955    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:40.937154    6988 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0416 17:57:40.937516    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:40.937516    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:40.937516    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:40.937516    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:40.937516    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:40.937516    6988 round_trippers.go:580]     Audit-Id: e2e8d91f-cc17-4b2b-a543-43ca22e7c70f
	I0416 17:57:40.937516    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:40.937792    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]

	I0416 17:57:40.938405    6988 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0416 17:57:40.938543    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:40.938543    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:40.938543    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:40.938543    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:40.938543    6988 round_trippers.go:580]     Content-Length: 291
	I0416 17:57:40.938662    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:40.938662    6988 round_trippers.go:580]     Audit-Id: 9f1849c0-96cc-4587-8702-5be0aa8b035b
	I0416 17:57:40.938662    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:40.938662    6988 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"474cfa32-79eb-4bf1-81ff-b938f83eaa0d","resourceVersion":"383","creationTimestamp":"2024-04-16T17:57:27Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0416 17:57:40.939508    6988 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"474cfa32-79eb-4bf1-81ff-b938f83eaa0d","resourceVersion":"383","creationTimestamp":"2024-04-16T17:57:27Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0416 17:57:40.939654    6988 round_trippers.go:463] PUT https://172.19.91.227:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0416 17:57:40.939709    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:40.939709    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:40.939709    6988 round_trippers.go:473]     Content-Type: application/json
	I0416 17:57:40.939709    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:40.954484    6988 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0416 17:57:40.954484    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Content-Length: 291
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Audit-Id: 33fbc171-b87c-4a8b-8b71-fb72b829abb0
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:40.954484    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:40.954484    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:40.954484    6988 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"474cfa32-79eb-4bf1-81ff-b938f83eaa0d","resourceVersion":"385","creationTimestamp":"2024-04-16T17:57:27Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0416 17:57:41.416463    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:41.416653    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:41.416463    6988 round_trippers.go:463] GET https://172.19.91.227:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0416 17:57:41.416653    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:41.416653    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:41.416653    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:41.416739    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:41.416886    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:41.420106    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:41.420495    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:41.420495    6988 round_trippers.go:580]     Audit-Id: 0ef8009e-dcde-4e08-b2eb-b21c97c9713b
	I0416 17:57:41.420495    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:41.420495    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:41.420495    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:41.420495    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:41.420495    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:41.420873    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:41.420873    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:41.420970    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:41.420970    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Content-Length: 291
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Audit-Id: 876a0092-4e47-429b-acd8-759d477820ca
	I0416 17:57:41.421083    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:41.421155    6988 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"474cfa32-79eb-4bf1-81ff-b938f83eaa0d","resourceVersion":"395","creationTimestamp":"2024-04-16T17:57:27Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0416 17:57:41.421374    6988 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-945500" context rescaled to 1 replicas
	I0416 17:57:41.920343    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:41.920343    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:41.920343    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:41.920343    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:41.925445    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 17:57:41.925445    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:41.925445    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:42 GMT
	I0416 17:57:41.925445    6988 round_trippers.go:580]     Audit-Id: 7df7d5cd-8d90-47e3-a620-e333515b8855
	I0416 17:57:41.925445    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:41.925445    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:41.925445    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:41.925445    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:41.927690    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:42.389093    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:42.389178    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:42.389320    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:42.390035    6988 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:57:42.389320    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:42.390775    6988 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:57:42.390775    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 17:57:42.390840    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:57:42.390906    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:42.391435    6988 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.91.227:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 17:57:42.392060    6988 addons.go:234] Setting addon default-storageclass=true in "multinode-945500"
	I0416 17:57:42.392151    6988 host.go:66] Checking if "multinode-945500" exists ...
	I0416 17:57:42.393041    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:42.412561    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:42.412743    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:42.412743    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:42.412743    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:42.419056    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 17:57:42.419366    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:42.419366    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:42.419366    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:42.419366    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:42.419366    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:42.419366    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:42 GMT
	I0416 17:57:42.419366    6988 round_trippers.go:580]     Audit-Id: b3f3bd38-d9b8-462a-9951-d6845f4c1e8b
	I0416 17:57:42.419606    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:42.919136    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:42.919136    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:42.919136    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:42.919136    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:42.922770    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:42.923481    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:42.923481    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:43 GMT
	I0416 17:57:42.923481    6988 round_trippers.go:580]     Audit-Id: 0619e710-cc23-453b-93b8-902006c18fd4
	I0416 17:57:42.923481    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:42.923481    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:42.923481    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:42.923481    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:42.924373    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:42.924671    6988 node_ready.go:53] node "multinode-945500" has status "Ready":"False"
	I0416 17:57:43.422289    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:43.422289    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:43.422289    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:43.422289    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:43.426297    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:43.426759    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:43.426855    6988 round_trippers.go:580]     Audit-Id: 3881c6f2-0168-43dd-afc5-e5828acf3c8d
	I0416 17:57:43.426855    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:43.426855    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:43.426855    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:43.426936    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:43.426936    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:43 GMT
	I0416 17:57:43.427005    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:43.912103    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:43.912103    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:43.912103    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:43.912103    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:43.915707    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:43.916753    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:43.916753    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:44 GMT
	I0416 17:57:43.916753    6988 round_trippers.go:580]     Audit-Id: 5c816ab6-0256-4da7-8677-2eed63915566
	I0416 17:57:43.916782    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:43.916782    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:43.916782    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:43.916782    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:43.917611    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:44.422232    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:44.422232    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:44.422232    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:44.422232    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:44.425983    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:44.426131    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:44.426131    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:44.426131    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:44 GMT
	I0416 17:57:44.426209    6988 round_trippers.go:580]     Audit-Id: 9338168a-3808-4f3d-8a58-744d48096dc5
	I0416 17:57:44.426209    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:44.426209    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:44.426209    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:44.426209    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:44.514747    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:44.514747    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:44.515754    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:57:44.517753    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:44.517753    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:44.517753    6988 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 17:57:44.517753    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 17:57:44.517753    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:44.911211    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:44.911456    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:44.911456    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:44.911456    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:44.915270    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:44.915270    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:44.915270    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:44.915270    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:44.915270    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:44.915270    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:44.915270    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:45 GMT
	I0416 17:57:44.915270    6988 round_trippers.go:580]     Audit-Id: 4c85a024-69e3-42e3-8a96-0b4369f957e4
	I0416 17:57:44.916208    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:45.417189    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:45.417189    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:45.417189    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:45.417189    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:45.424768    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 17:57:45.424768    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:45.424768    6988 round_trippers.go:580]     Audit-Id: 0310038d-76b3-4992-9ac3-7533f23a7d71
	I0416 17:57:45.424768    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:45.424768    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:45.424768    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:45.424768    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:45.424768    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:45 GMT
	I0416 17:57:45.425371    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:45.425371    6988 node_ready.go:53] node "multinode-945500" has status "Ready":"False"
	I0416 17:57:45.923330    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:45.923330    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:45.923330    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:45.923330    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:45.925920    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:45.925920    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:45.926718    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:45.926718    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:45.926718    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:46 GMT
	I0416 17:57:45.926718    6988 round_trippers.go:580]     Audit-Id: 97c2ee9c-f0ff-43e0-b2a8-48327b90a95f
	I0416 17:57:45.926718    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:45.926718    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:45.927203    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:46.418033    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:46.418033    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:46.418033    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:46.418033    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:46.501786    6988 round_trippers.go:574] Response Status: 200 OK in 83 milliseconds
	I0416 17:57:46.501786    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:46.501786    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:46 GMT
	I0416 17:57:46.501786    6988 round_trippers.go:580]     Audit-Id: 7df6f9f0-10ff-4db8-bfad-3fc7f1364386
	I0416 17:57:46.501786    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:46.501786    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:46.501905    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:46.501905    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:46.503216    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:46.635075    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:46.635075    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:46.635935    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:57:46.921581    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:46.921653    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:46.921653    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:46.921720    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:46.924533    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:46.924533    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:46.924758    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:46.924758    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:46.924758    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:46.924758    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:46.924758    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:47 GMT
	I0416 17:57:46.924758    6988 round_trippers.go:580]     Audit-Id: e78831c8-f850-4752-a899-e59b21c78198
	I0416 17:57:46.924832    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:46.982609    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:57:46.982609    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:46.982609    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:57:47.140657    6988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:57:47.423704    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:47.423704    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:47.423704    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:47.423704    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:47.427881    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:47.428047    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:47.428047    6988 round_trippers.go:580]     Audit-Id: 23292552-c2df-4084-b58f-d36e231163f8
	I0416 17:57:47.428047    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:47.428047    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:47.428047    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:47.428047    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:47.428047    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:47 GMT
	I0416 17:57:47.428436    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:47.428909    6988 node_ready.go:53] node "multinode-945500" has status "Ready":"False"
	I0416 17:57:47.642156    6988 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0416 17:57:47.642156    6988 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0416 17:57:47.642263    6988 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0416 17:57:47.642263    6988 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0416 17:57:47.642263    6988 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0416 17:57:47.642352    6988 command_runner.go:130] > pod/storage-provisioner created
	I0416 17:57:47.915174    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:47.915174    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:47.915174    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:47.915174    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:47.919802    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:47.919802    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:47.919802    6988 round_trippers.go:580]     Audit-Id: 695031a3-c73c-4762-a80a-ead4be6d3a90
	I0416 17:57:47.919802    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:47.919802    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:47.919802    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:47.919802    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:47.919802    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:48 GMT
	I0416 17:57:47.921798    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:48.424055    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:48.424122    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:48.424122    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:48.424122    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:48.427517    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:48.427517    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:48.427517    6988 round_trippers.go:580]     Audit-Id: 7545d9c7-2c95-4fab-863b-976fb672f07e
	I0416 17:57:48.427517    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:48.427517    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:48.427517    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:48.427517    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:48.427517    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:48 GMT
	I0416 17:57:48.428336    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:48.912182    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:48.912285    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:48.912285    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:48.912285    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:48.915718    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:48.915718    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:48.915718    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:48.915718    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:48.915718    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:49 GMT
	I0416 17:57:48.915718    6988 round_trippers.go:580]     Audit-Id: 2263b32c-d20d-46cd-879e-9105b86a7194
	I0416 17:57:48.915718    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:48.915718    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:48.916253    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:49.012275    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:57:49.012444    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:49.012783    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:57:49.142232    6988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 17:57:49.275828    6988 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0416 17:57:49.276194    6988 round_trippers.go:463] GET https://172.19.91.227:8443/apis/storage.k8s.io/v1/storageclasses
	I0416 17:57:49.276271    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:49.276271    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:49.276381    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:49.279132    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:49.279132    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:49.279132    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:49.279132    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:49.279132    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:49.279132    6988 round_trippers.go:580]     Content-Length: 1273
	I0416 17:57:49.279132    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:49 GMT
	I0416 17:57:49.279397    6988 round_trippers.go:580]     Audit-Id: b06ff280-6eac-43c1-91fe-e3ebbad21f66
	I0416 17:57:49.279397    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:49.279397    6988 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"standard","uid":"fad243f1-4905-48ae-985d-d89cda0607a0","resourceVersion":"419","creationTimestamp":"2024-04-16T17:57:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-16T17:57:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0416 17:57:49.279545    6988 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"fad243f1-4905-48ae-985d-d89cda0607a0","resourceVersion":"419","creationTimestamp":"2024-04-16T17:57:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-16T17:57:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0416 17:57:49.279545    6988 round_trippers.go:463] PUT https://172.19.91.227:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0416 17:57:49.280079    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:49.280079    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:49.280079    6988 round_trippers.go:473]     Content-Type: application/json
	I0416 17:57:49.280122    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:49.283131    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:49.283131    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Audit-Id: 58e327bf-d681-4c51-8630-376535cfdae0
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:49.283131    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:49.283131    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Content-Length: 1220
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:49 GMT
	I0416 17:57:49.283131    6988 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"fad243f1-4905-48ae-985d-d89cda0607a0","resourceVersion":"419","creationTimestamp":"2024-04-16T17:57:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-16T17:57:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0416 17:57:49.284142    6988 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 17:57:49.285110    6988 addons.go:505] duration metric: took 8.9631309s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 17:57:49.413824    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:49.413824    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:49.413824    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:49.413824    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:49.420066    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 17:57:49.420066    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:49.420066    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:49 GMT
	I0416 17:57:49.420066    6988 round_trippers.go:580]     Audit-Id: 673fcfb7-e79c-42ba-abaf-e828c3df7a7a
	I0416 17:57:49.420066    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:49.420066    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:49.420066    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:49.420066    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:49.420066    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:49.915557    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:49.915632    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:49.915632    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:49.915632    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:49.920023    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:49.920023    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:49.920023    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:49.920023    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:49.920023    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:49.920023    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:49.920023    6988 round_trippers.go:580]     Audit-Id: cb813c2c-6bb9-41d0-a192-81d5df39cc31
	I0416 17:57:49.920023    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:49.920752    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:49.920881    6988 node_ready.go:53] node "multinode-945500" has status "Ready":"False"
	I0416 17:57:50.414309    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:50.414309    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.414309    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.414309    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.421246    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 17:57:50.421246    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.421246    6988 round_trippers.go:580]     Audit-Id: 9a47d54e-a489-4e7c-8e6e-1768c6e24a06
	I0416 17:57:50.421246    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.421246    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.421246    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.421246    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.421246    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:50.421586    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:50.422041    6988 node_ready.go:49] node "multinode-945500" has status "Ready":"True"
	I0416 17:57:50.422127    6988 node_ready.go:38] duration metric: took 9.5128501s for node "multinode-945500" to be "Ready" ...
	I0416 17:57:50.422127    6988 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:57:50.422288    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:50.422288    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.422288    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.422352    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.426293    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:50.426293    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.426293    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:50.426293    6988 round_trippers.go:580]     Audit-Id: 13196519-ea29-4856-beaa-5c943f886806
	I0416 17:57:50.426293    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.426293    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.426645    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.426645    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.427551    6988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56336 chars]
	I0416 17:57:50.432315    6988 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:50.432315    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:50.432315    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.432315    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.432315    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.435446    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:50.435446    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.435446    6988 round_trippers.go:580]     Audit-Id: 0da838d3-4490-46a7-8d52-0929abb29d06
	I0416 17:57:50.435446    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.435446    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.435446    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.435446    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.435446    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:50.435667    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0416 17:57:50.436341    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:50.436417    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.436417    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.436417    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.441670    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 17:57:50.441670    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.441670    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:50.441670    6988 round_trippers.go:580]     Audit-Id: 7f63ee25-4ff7-418f-b7b2-b71003d58b29
	I0416 17:57:50.441670    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.441670    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.441670    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.441670    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.441670    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:50.933620    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:50.933620    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.933620    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.933620    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.936638    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:50.936638    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.937401    6988 round_trippers.go:580]     Audit-Id: 61428305-720d-4f2d-9189-d4c9892ef7e3
	I0416 17:57:50.937401    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.937401    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.937401    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.937401    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.937401    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:51 GMT
	I0416 17:57:50.937680    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0416 17:57:50.938372    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:50.938438    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.938438    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.938438    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.940646    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:50.940646    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.940646    6988 round_trippers.go:580]     Audit-Id: 62d4cd2d-a2dc-447d-8fe8-0ab2e8469374
	I0416 17:57:50.940646    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.940646    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.940646    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.940646    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.940646    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:51 GMT
	I0416 17:57:50.941893    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:51.436888    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:51.436973    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:51.437057    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:51.437057    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:51.440468    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:51.440468    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:51.440468    6988 round_trippers.go:580]     Audit-Id: 854d513c-8ed8-40d2-a6f4-c3ce631c5044
	I0416 17:57:51.440468    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:51.440468    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:51.440468    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:51.440468    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:51.440468    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:51 GMT
	I0416 17:57:51.441473    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0416 17:57:51.442446    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:51.442513    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:51.442513    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:51.442513    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:51.448074    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 17:57:51.448074    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:51.448074    6988 round_trippers.go:580]     Audit-Id: ea821fd7-5bb9-4fc8-adab-1d7de329d33c
	I0416 17:57:51.448074    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:51.448074    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:51.448074    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:51.448074    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:51.448074    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:51 GMT
	I0416 17:57:51.448761    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:51.936346    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:51.936438    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:51.936438    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:51.936438    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:51.940774    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:51.940774    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:51.940774    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:51.940774    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:51.940774    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:51.940774    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:51.940774    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:51.940774    6988 round_trippers.go:580]     Audit-Id: 39edef38-eddb-4269-abe8-a908e1d21987
	I0416 17:57:51.941262    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0416 17:57:51.941999    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:51.942068    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:51.942068    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:51.942068    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:51.944728    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:51.944728    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:51.944728    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:51.944728    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:51.945637    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:51.945637    6988 round_trippers.go:580]     Audit-Id: e9f648f9-92bc-4242-8c2c-17b661038154
	I0416 17:57:51.945637    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:51.945637    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:51.945961    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.434152    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:52.434152    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.434152    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.434152    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.438737    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:52.438737    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.438905    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.438905    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.438905    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.438905    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.438905    6988 round_trippers.go:580]     Audit-Id: 64fc4c09-2c08-4c20-886d-b65cc89badc2
	I0416 17:57:52.438905    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.439311    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0416 17:57:52.440372    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.440372    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.440471    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.440471    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.442800    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.442800    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.442800    6988 round_trippers.go:580]     Audit-Id: 69a074dd-0323-4dfd-a4d9-2a31cf93ae57
	I0416 17:57:52.442800    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.442800    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.442800    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.442800    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.442800    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.443974    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.444376    6988 pod_ready.go:92] pod "coredns-76f75df574-86z7h" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.444463    6988 pod_ready.go:81] duration metric: took 2.0119463s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.444463    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.444559    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 17:57:52.444559    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.444559    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.444559    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.448264    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:52.448675    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.448709    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.448709    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.448709    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.448709    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.448709    6988 round_trippers.go:580]     Audit-Id: 6a1f3697-4191-47e0-93ea-8556479112b5
	I0416 17:57:52.448709    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.448895    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"245cef70-3506-471b-9bf6-dd14a2c23d8c","resourceVersion":"372","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.91.227:2379","kubernetes.io/config.hash":"c735a32dacf9631b2b4787fe99cff316","kubernetes.io/config.mirror":"c735a32dacf9631b2b4787fe99cff316","kubernetes.io/config.seen":"2024-04-16T17:57:28.101466445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0416 17:57:52.449544    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.449618    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.449618    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.449618    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.457774    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0416 17:57:52.457774    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.457774    6988 round_trippers.go:580]     Audit-Id: 6aa9935f-5cde-4c2d-90c1-770e6d9b42ec
	I0416 17:57:52.457774    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.457774    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.457774    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.457774    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.457774    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.457774    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.457774    6988 pod_ready.go:92] pod "etcd-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.457774    6988 pod_ready.go:81] duration metric: took 13.3102ms for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.458783    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.458817    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-945500
	I0416 17:57:52.458817    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.458817    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.458817    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.462379    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:52.462379    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.462379    6988 round_trippers.go:580]     Audit-Id: 3d6fa3f7-ff7f-4322-a2e8-b5a0c4fb1daf
	I0416 17:57:52.462379    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.462379    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.462379    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.462379    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.462379    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.462379    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-945500","namespace":"kube-system","uid":"c6ae969a-de5d-4c7e-af09-b1a5eb21f2ab","resourceVersion":"314","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.91.227:8443","kubernetes.io/config.hash":"564fae5a3e5851c815d6092b123a5395","kubernetes.io/config.mirror":"564fae5a3e5851c815d6092b123a5395","kubernetes.io/config.seen":"2024-04-16T17:57:28.101471746Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0416 17:57:52.464244    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.464374    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.464374    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.464374    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.466690    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.466690    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.466690    6988 round_trippers.go:580]     Audit-Id: d3396616-a825-4d83-94f7-1691134d1559
	I0416 17:57:52.466690    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.466690    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.466690    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.466690    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.466690    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.467128    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.467128    6988 pod_ready.go:92] pod "kube-apiserver-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.467128    6988 pod_ready.go:81] duration metric: took 8.3444ms for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.467128    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.467128    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-945500
	I0416 17:57:52.467655    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.467655    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.467655    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.469965    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.469965    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.469965    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.469965    6988 round_trippers.go:580]     Audit-Id: 69b40722-0130-4c39-98a1-4a3e7990d75a
	I0416 17:57:52.469965    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.469965    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.469965    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.469965    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.469965    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-945500","namespace":"kube-system","uid":"01b937c2-9827-4240-83f0-3536fec5eb5e","resourceVersion":"345","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.mirror":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.seen":"2024-04-16T17:57:28.101473146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0416 17:57:52.471692    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.471736    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.471736    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.471736    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.474312    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.474312    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.474312    6988 round_trippers.go:580]     Audit-Id: ef6911fd-c5b9-4c1a-85d8-6d4810547589
	I0416 17:57:52.474312    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.474312    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.474312    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.474312    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.474312    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.474842    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.475259    6988 pod_ready.go:92] pod "kube-controller-manager-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.475298    6988 pod_ready.go:81] duration metric: took 8.1314ms for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.475298    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.475372    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 17:57:52.475407    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.475446    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.475446    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.480328    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:52.480328    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.480328    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.480328    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.480328    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.480328    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.480328    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.480328    6988 round_trippers.go:580]     Audit-Id: 5505b192-812e-4b7d-b573-cc48b255735a
	I0416 17:57:52.480328    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rfxsg","generateName":"kube-proxy-","namespace":"kube-system","uid":"b740e6e0-4768-4dd4-a958-307662a92578","resourceVersion":"401","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0416 17:57:52.480969    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.480969    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.480969    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.480969    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.484123    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:52.484123    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.484123    6988 round_trippers.go:580]     Audit-Id: 242d2743-3177-42b4-9e74-5bce35db3f1d
	I0416 17:57:52.484123    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.484123    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.484123    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.484123    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.484123    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.484955    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.485557    6988 pod_ready.go:92] pod "kube-proxy-rfxsg" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.485602    6988 pod_ready.go:81] duration metric: took 10.2584ms for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.485602    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.638123    6988 request.go:629] Waited for 152.4159ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 17:57:52.638123    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 17:57:52.638123    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.638123    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.638123    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.642880    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:52.642880    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.642880    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.642880    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.642880    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.642880    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.642880    6988 round_trippers.go:580]     Audit-Id: 8f2e930a-7531-48ab-83eb-71103cec3dde
	I0416 17:57:52.642880    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.642880    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-945500","namespace":"kube-system","uid":"a09e52e8-1ac2-4c22-8a3d-57969fae85a9","resourceVersion":"310","creationTimestamp":"2024-04-16T17:57:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.mirror":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.seen":"2024-04-16T17:57:20.694761708Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0416 17:57:52.840231    6988 request.go:629] Waited for 196.2271ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.840540    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.840540    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.840640    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.840640    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.845870    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 17:57:52.845870    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.845870    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.845870    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.845870    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.845870    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.845870    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:52.845870    6988 round_trippers.go:580]     Audit-Id: 05acaca5-b7c1-4fab-9ace-d775a055e4f5
	I0416 17:57:52.846425    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.846879    6988 pod_ready.go:92] pod "kube-scheduler-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.846957    6988 pod_ready.go:81] duration metric: took 361.3343ms for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.846957    6988 pod_ready.go:38] duration metric: took 2.4246918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:57:52.846957    6988 api_server.go:52] waiting for apiserver process to appear ...
	I0416 17:57:52.859063    6988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:57:52.885312    6988 command_runner.go:130] > 2058
	I0416 17:57:52.885400    6988 api_server.go:72] duration metric: took 12.562985s to wait for apiserver process to appear ...
	I0416 17:57:52.885400    6988 api_server.go:88] waiting for apiserver healthz status ...
	I0416 17:57:52.885400    6988 api_server.go:253] Checking apiserver healthz at https://172.19.91.227:8443/healthz ...
	I0416 17:57:52.898178    6988 api_server.go:279] https://172.19.91.227:8443/healthz returned 200:
	ok
	I0416 17:57:52.898356    6988 round_trippers.go:463] GET https://172.19.91.227:8443/version
	I0416 17:57:52.898430    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.898430    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.898463    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.900671    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.900731    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.900731    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.900731    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Content-Length: 263
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Audit-Id: 23327aeb-4415-44a9-ac4c-ac1fb850d1c4
	I0416 17:57:52.900731    6988 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0416 17:57:52.900731    6988 api_server.go:141] control plane version: v1.29.3
	I0416 17:57:52.900731    6988 api_server.go:131] duration metric: took 15.3302ms to wait for apiserver health ...
	I0416 17:57:52.900731    6988 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 17:57:53.042203    6988 request.go:629] Waited for 141.464ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:53.042203    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:53.042203    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:53.042203    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:53.042203    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:53.047811    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:53.047811    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:53.047931    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:53.047931    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:53.047931    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:53.047931    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:53.047931    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:53.047931    6988 round_trippers.go:580]     Audit-Id: 0112d2ef-1059-4960-9329-11966d09c0ed
	I0416 17:57:53.050025    6988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56450 chars]
	I0416 17:57:53.056232    6988 system_pods.go:59] 8 kube-system pods found
	I0416 17:57:53.056303    6988 system_pods.go:61] "coredns-76f75df574-86z7h" [1ca004a0-0575-4576-a5ed-ba0891a7d277] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "etcd-multinode-945500" [245cef70-3506-471b-9bf6-dd14a2c23d8c] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kindnet-tp7jl" [91595b62-10ee-47cb-a0c9-2ca83ad70af7] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kube-apiserver-multinode-945500" [c6ae969a-de5d-4c7e-af09-b1a5eb21f2ab] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kube-controller-manager-multinode-945500" [01b937c2-9827-4240-83f0-3536fec5eb5e] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kube-proxy-rfxsg" [b740e6e0-4768-4dd4-a958-307662a92578] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kube-scheduler-multinode-945500" [a09e52e8-1ac2-4c22-8a3d-57969fae85a9] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "storage-provisioner" [3bd5cc95-eef6-473e-b6f9-898568046f1b] Running
	I0416 17:57:53.056378    6988 system_pods.go:74] duration metric: took 155.5639ms to wait for pod list to return data ...
	I0416 17:57:53.056378    6988 default_sa.go:34] waiting for default service account to be created ...
	I0416 17:57:53.242714    6988 request.go:629] Waited for 186.2414ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/default/serviceaccounts
	I0416 17:57:53.242956    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/default/serviceaccounts
	I0416 17:57:53.242956    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:53.243091    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:53.243091    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:53.246460    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:53.246460    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:53.246962    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:53.246962    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Content-Length: 261
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Audit-Id: da3e035a-782e-4d26-b641-e9ec06113208
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:53.247049    6988 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"26260d2a-9800-4f2e-87ba-a34049d52e3f","resourceVersion":"332","creationTimestamp":"2024-04-16T17:57:40Z"}}]}
	I0416 17:57:53.247481    6988 default_sa.go:45] found service account: "default"
	I0416 17:57:53.247563    6988 default_sa.go:55] duration metric: took 191.174ms for default service account to be created ...
	I0416 17:57:53.247563    6988 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 17:57:53.445373    6988 request.go:629] Waited for 197.6083ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:53.445373    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:53.445373    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:53.445373    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:53.445373    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:53.453613    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0416 17:57:53.453613    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:53.453613    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:53.453613    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:53.453613    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:53.453613    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:53.453613    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:53.453613    6988 round_trippers.go:580]     Audit-Id: a54cbc48-ccbf-4ab0-b75f-121f6c3ab39c
	I0416 17:57:53.454598    6988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56450 chars]
	I0416 17:57:53.457215    6988 system_pods.go:86] 8 kube-system pods found
	I0416 17:57:53.457215    6988 system_pods.go:89] "coredns-76f75df574-86z7h" [1ca004a0-0575-4576-a5ed-ba0891a7d277] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "etcd-multinode-945500" [245cef70-3506-471b-9bf6-dd14a2c23d8c] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kindnet-tp7jl" [91595b62-10ee-47cb-a0c9-2ca83ad70af7] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kube-apiserver-multinode-945500" [c6ae969a-de5d-4c7e-af09-b1a5eb21f2ab] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kube-controller-manager-multinode-945500" [01b937c2-9827-4240-83f0-3536fec5eb5e] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kube-proxy-rfxsg" [b740e6e0-4768-4dd4-a958-307662a92578] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kube-scheduler-multinode-945500" [a09e52e8-1ac2-4c22-8a3d-57969fae85a9] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "storage-provisioner" [3bd5cc95-eef6-473e-b6f9-898568046f1b] Running
	I0416 17:57:53.457215    6988 system_pods.go:126] duration metric: took 209.6402ms to wait for k8s-apps to be running ...
	I0416 17:57:53.457215    6988 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 17:57:53.465993    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:57:53.490843    6988 system_svc.go:56] duration metric: took 32.799ms WaitForService to wait for kubelet
	I0416 17:57:53.490843    6988 kubeadm.go:576] duration metric: took 13.1684808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:57:53.490945    6988 node_conditions.go:102] verifying NodePressure condition ...
	I0416 17:57:53.646796    6988 request.go:629] Waited for 155.5885ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes
	I0416 17:57:53.647092    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes
	I0416 17:57:53.647092    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:53.647092    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:53.647092    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:53.650750    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:53.650750    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:53.650750    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:53.650750    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:53.650750    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:53.651249    6988 round_trippers.go:580]     Audit-Id: a39fa908-8f98-49bc-a6db-1564faa14911
	I0416 17:57:53.651249    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:53.651249    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:53.651424    6988 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 4843 chars]
	I0416 17:57:53.651922    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 17:57:53.651922    6988 node_conditions.go:123] node cpu capacity is 2
	I0416 17:57:53.651922    6988 node_conditions.go:105] duration metric: took 160.9684ms to run NodePressure ...
	I0416 17:57:53.652035    6988 start.go:240] waiting for startup goroutines ...
	I0416 17:57:53.652035    6988 start.go:245] waiting for cluster config update ...
	I0416 17:57:53.652035    6988 start.go:254] writing updated cluster config ...
	I0416 17:57:53.653564    6988 out.go:177] 
	I0416 17:57:53.669380    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:57:53.669380    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:57:53.672905    6988 out.go:177] * Starting "multinode-945500-m02" worker node in "multinode-945500" cluster
	I0416 17:57:53.673088    6988 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 17:57:53.673617    6988 cache.go:56] Caching tarball of preloaded images
	I0416 17:57:53.673750    6988 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 17:57:53.673750    6988 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 17:57:53.674279    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:57:53.682401    6988 start.go:360] acquireMachinesLock for multinode-945500-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:57:53.682401    6988 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-945500-m02"
	I0416 17:57:53.682989    6988 start.go:93] Provisioning new machine with config: &{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0416 17:57:53.682989    6988 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0416 17:57:53.683581    6988 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 17:57:53.683581    6988 start.go:159] libmachine.API.Create for "multinode-945500" (driver="hyperv")
	I0416 17:57:53.683581    6988 client.go:168] LocalClient.Create starting
	I0416 17:57:53.684171    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 17:57:53.684171    6988 main.go:141] libmachine: Decoding PEM data...
	I0416 17:57:53.684730    6988 main.go:141] libmachine: Parsing certificate...
	I0416 17:57:53.684763    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 17:57:53.684763    6988 main.go:141] libmachine: Decoding PEM data...
	I0416 17:57:53.684763    6988 main.go:141] libmachine: Parsing certificate...
	I0416 17:57:53.684763    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 17:57:55.392368    6988 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 17:57:55.392368    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:55.393364    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 17:57:56.931487    6988 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 17:57:56.931487    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:56.931487    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 17:57:58.272841    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 17:57:58.273519    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:58.273519    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 17:58:01.537799    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 17:58:01.537799    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:01.539609    6988 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 17:58:01.848885    6988 main.go:141] libmachine: Creating SSH key...
	I0416 17:58:02.010218    6988 main.go:141] libmachine: Creating VM...
	I0416 17:58:02.011217    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 17:58:04.625040    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 17:58:04.625040    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:04.625917    6988 main.go:141] libmachine: Using switch "Default Switch"
	I0416 17:58:04.625917    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 17:58:06.258751    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 17:58:06.258751    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:06.258751    6988 main.go:141] libmachine: Creating VHD
	I0416 17:58:06.258751    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 17:58:09.852420    6988 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C09A8F8B-563A-41CF-AB1F-9B4C422F3FC9
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 17:58:09.852568    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:09.852568    6988 main.go:141] libmachine: Writing magic tar header
	I0416 17:58:09.852638    6988 main.go:141] libmachine: Writing SSH key tar header
	I0416 17:58:09.862039    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 17:58:12.878751    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:12.878751    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:12.878751    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\disk.vhd' -SizeBytes 20000MB
	I0416 17:58:15.237605    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:15.237605    6988 main.go:141] libmachine: [stderr =====>] : 
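The VHD sequence above (create a tiny fixed VHD, write a "magic tar header" carrying the SSH key, convert to a dynamic disk, then resize) relies on the guest being able to read a tar archive straight off the start of the raw disk on first boot. A minimal local sketch of that idea, using a scratch file in place of the VHD (all paths and the key content are placeholders):

```shell
# Sketch of the "magic tar header" step above: the SSH key is packed as a
# tar archive at the start of the raw fixed disk, so the guest can find
# and extract it on first boot. All paths here are scratch placeholders.
WORK=$(mktemp -d)
mkdir -p "$WORK/keydir/.ssh"
echo 'ssh-rsa AAAA... demo-key' > "$WORK/keydir/.ssh/authorized_keys"
# the tar archive stands in for the start of the 10MB fixed VHD
tar -C "$WORK/keydir" -cf "$WORK/disk.img" .ssh
# pad with zeros, as the remainder of the fixed disk would be
dd if=/dev/zero bs=1024 count=64 >> "$WORK/disk.img" 2>/dev/null
# the archive is still readable straight off the "disk"
LISTING=$(tar -tf "$WORK/disk.img")
echo "$LISTING"   # lists .ssh/ and .ssh/authorized_keys
rm -rf "$WORK"
```

Trailing zero blocks are a valid tar end-of-archive marker, which is why padding the image out to disk size does not break extraction.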
	I0416 17:58:15.237605    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-945500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 17:58:18.410858    6988 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-945500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 17:58:18.411873    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:18.411914    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-945500-m02 -DynamicMemoryEnabled $false
	I0416 17:58:20.486445    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:20.486524    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:20.486600    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-945500-m02 -Count 2
	I0416 17:58:22.474057    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:22.474057    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:22.474057    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-945500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\boot2docker.iso'
	I0416 17:58:24.877959    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:24.877959    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:24.878134    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-945500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\disk.vhd'
	I0416 17:58:27.308442    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:27.309253    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:27.309253    6988 main.go:141] libmachine: Starting VM...
	I0416 17:58:27.309346    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-945500-m02
	I0416 17:58:29.937973    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:29.937973    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:29.937973    6988 main.go:141] libmachine: Waiting for host to start...
	I0416 17:58:29.938140    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:32.040669    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:32.040669    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:32.040763    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:34.346849    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:34.346849    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:35.361237    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:37.380851    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:37.380851    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:37.381523    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:39.667097    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:39.667097    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:40.670143    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:42.688257    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:42.688257    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:42.688328    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:44.946196    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:44.946196    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:45.948919    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:47.976127    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:47.976127    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:47.976535    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:50.265300    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:50.265477    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:51.278063    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:53.353234    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:53.353234    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:53.353542    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:55.731097    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:58:55.731585    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:55.731648    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:57.706259    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:57.706259    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:57.706259    6988 machine.go:94] provisionDockerMachine start ...
	I0416 17:58:57.706337    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:59.674406    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:59.674406    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:59.675593    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:01.982982    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:01.982982    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:01.989231    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:02.000855    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:02.000855    6988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:59:02.131967    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 17:59:02.132116    6988 buildroot.go:166] provisioning hostname "multinode-945500-m02"
	I0416 17:59:02.132244    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:04.030355    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:04.031102    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:04.031102    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:06.380424    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:06.380424    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:06.385493    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:06.385574    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:06.385574    6988 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-945500-m02 && echo "multinode-945500-m02" | sudo tee /etc/hostname
	I0416 17:59:06.536173    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-945500-m02
	
	I0416 17:59:06.536238    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:08.514008    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:08.514084    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:08.514108    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:10.867331    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:10.867331    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:10.872002    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:10.872167    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:10.872167    6988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-945500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-945500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-945500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:59:11.029689    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
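The SSH snippet above makes the new hostname survive reboots by rewriting (or appending) the `127.0.1.1` entry in /etc/hosts. It can be exercised locally against a scratch copy of the file (a sketch: no sudo needed, and `HOSTS`/`NAME` are placeholders for illustration):

```shell
# Sketch of the hostname-persistence logic above, run against a scratch
# file instead of the real /etc/hosts. HOSTS and NAME are placeholders.
HOSTS=$(mktemp)
NAME=multinode-945500-m02
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # rewrite the existing 127.0.1.1 entry in place
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # otherwise append a fresh entry
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
RESULT=$(grep '^127\.0\.1\.1' "$HOSTS")
echo "$RESULT"   # prints: 127.0.1.1 multinode-945500-m02
rm -f "$HOSTS"
```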
	I0416 17:59:11.029689    6988 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 17:59:11.029689    6988 buildroot.go:174] setting up certificates
	I0416 17:59:11.029689    6988 provision.go:84] configureAuth start
	I0416 17:59:11.029689    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:13.049800    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:13.050575    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:13.050646    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:15.359589    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:15.359589    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:15.359846    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:17.299020    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:17.299020    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:17.300075    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:19.605590    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:19.605590    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:19.605590    6988 provision.go:143] copyHostCerts
	I0416 17:59:19.605792    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 17:59:19.606057    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 17:59:19.606057    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 17:59:19.606675    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 17:59:19.607815    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 17:59:19.608147    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 17:59:19.608226    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 17:59:19.608494    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 17:59:19.609301    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 17:59:19.609365    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 17:59:19.609365    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 17:59:19.609365    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 17:59:19.610613    6988 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-945500-m02 san=[127.0.0.1 172.19.91.6 localhost minikube multinode-945500-m02]
	I0416 17:59:19.702929    6988 provision.go:177] copyRemoteCerts
	I0416 17:59:19.710522    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:59:19.710522    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:21.626659    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:21.626659    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:21.627629    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:23.970899    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:23.970899    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:23.971221    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 17:59:24.079459    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3686883s)
	I0416 17:59:24.079459    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 17:59:24.080474    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 17:59:24.123694    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 17:59:24.124179    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0416 17:59:24.164830    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 17:59:24.165649    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 17:59:24.208692    6988 provision.go:87] duration metric: took 13.1782183s to configureAuth
	I0416 17:59:24.208692    6988 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:59:24.209067    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:59:24.209160    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:26.153425    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:26.153425    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:26.153714    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:28.507518    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:28.507518    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:28.511037    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:28.511634    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:28.511634    6988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 17:59:28.639516    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 17:59:28.639516    6988 buildroot.go:70] root file system type: tmpfs
	I0416 17:59:28.639516    6988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 17:59:28.639516    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:30.530854    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:30.531013    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:30.531013    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:32.826918    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:32.826918    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:32.832383    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:32.832984    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:32.832984    6988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.91.227"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 17:59:32.992600    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.91.227
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 17:59:32.992774    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:34.963694    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:34.963694    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:34.963799    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:37.247922    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:37.247922    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:37.252024    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:37.252024    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:37.252024    6988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 17:59:39.216273    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
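The SSH command above uses a replace-only-if-changed idiom: render the candidate unit to a `.new` path, and only when it differs from (or there is no) installed copy, move it into place and restart the service. A minimal standalone sketch of that pattern (function name and the `unchanged`/`updated` markers are illustrative, not minikube's):

```shell
#!/bin/sh
# Replace a config file only when its content changed; mirrors the
# `diff -u old new || { mv new old; reload; }` idiom from the log.
update_if_changed() {
  target="$1"     # installed file, e.g. /lib/systemd/system/docker.service
  candidate="$2"  # freshly rendered file, e.g. docker.service.new
  if diff -u "$target" "$candidate" >/dev/null 2>&1; then
    rm -f "$candidate"          # identical: discard the candidate
    echo unchanged
  else
    mv "$candidate" "$target"   # differs, or target missing: install it
    echo updated                # a real caller would daemon-reload + restart here
  fi
}
```

Note that `diff` exits non-zero both when the files differ and when the target does not exist, which is why the log shows `diff: can't stat '/lib/systemd/system/docker.service'` yet the move and enable still proceed.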
	
	I0416 17:59:39.216273    6988 machine.go:97] duration metric: took 41.5076568s to provisionDockerMachine
	I0416 17:59:39.216367    6988 client.go:171] duration metric: took 1m45.5267916s to LocalClient.Create
	I0416 17:59:39.216420    6988 start.go:167] duration metric: took 1m45.5268452s to libmachine.API.Create "multinode-945500"
	I0416 17:59:39.216420    6988 start.go:293] postStartSetup for "multinode-945500-m02" (driver="hyperv")
	I0416 17:59:39.216420    6988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:59:39.225464    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:59:39.225464    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:41.131652    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:41.131652    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:41.132015    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:43.445904    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:43.445904    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:43.446473    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 17:59:43.549649    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3239396s)
	I0416 17:59:43.558710    6988 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:59:43.563635    6988 command_runner.go:130] > NAME=Buildroot
	I0416 17:59:43.563635    6988 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 17:59:43.563635    6988 command_runner.go:130] > ID=buildroot
	I0416 17:59:43.563635    6988 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 17:59:43.563635    6988 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 17:59:43.563635    6988 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:59:43.563635    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 17:59:43.565096    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 17:59:43.566332    6988 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 17:59:43.566332    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 17:59:43.575822    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:59:43.593251    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 17:59:43.635050    6988 start.go:296] duration metric: took 4.4183786s for postStartSetup
	I0416 17:59:43.637173    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:45.591586    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:45.591586    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:45.591966    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:47.994749    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:47.994749    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:47.994889    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:59:47.996574    6988 start.go:128] duration metric: took 1m54.3070064s to createHost
	I0416 17:59:47.996664    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:49.890109    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:49.890109    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:49.890628    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:52.220872    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:52.220872    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:52.225852    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:52.226248    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:52.226248    6988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 17:59:52.368040    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713290392.538512769
	
	I0416 17:59:52.368040    6988 fix.go:216] guest clock: 1713290392.538512769
	I0416 17:59:52.368040    6988 fix.go:229] Guest: 2024-04-16 17:59:52.538512769 +0000 UTC Remote: 2024-04-16 17:59:47.9965749 +0000 UTC m=+309.651339801 (delta=4.541937869s)
	I0416 17:59:52.368159    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:54.442418    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:54.442507    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:54.442581    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:56.760874    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:56.760874    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:56.765985    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:56.766627    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:56.766627    6988 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713290392
	I0416 17:59:56.909969    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 17:59:52 UTC 2024
	
	I0416 17:59:56.909969    6988 fix.go:236] clock set: Tue Apr 16 17:59:52 UTC 2024
	 (err=<nil>)
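The fix.go lines above read the guest clock as epoch seconds, compute the delta against the host's reference time, and resync via `sudo date -s @<epoch>` over SSH. A sketch of that decision (function names and the 2-second threshold are assumptions for illustration; the epoch values are taken from the log):

```shell
#!/bin/sh
# Signed guest-minus-host delta in whole seconds.
clock_delta() {
  echo $(( $1 - $2 ))
}
# True when the absolute delta exceeds the given threshold.
needs_resync() {
  d=$(clock_delta "$1" "$2")
  [ "${d#-}" -gt "$3" ]
}

guest=1713290392   # guest read 1713290392.538512769 in the log
host=1713290387    # host reference; the log's delta was ~4.5s
if needs_resync "$guest" "$host" 2; then
  echo "sudo date -s @$guest"   # resync command shape, as run over SSH above
fi
```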
	I0416 17:59:56.909969    6988 start.go:83] releasing machines lock for "multinode-945500-m02", held for 2m3.2205685s
	I0416 17:59:56.909969    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:58.843464    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:58.843464    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:58.843546    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:00:01.159738    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:00:01.160789    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:01.160917    6988 out.go:177] * Found network options:
	I0416 18:00:01.161771    6988 out.go:177]   - NO_PROXY=172.19.91.227
	W0416 18:00:01.162783    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 18:00:01.163550    6988 out.go:177]   - NO_PROXY=172.19.91.227
	W0416 18:00:01.163820    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 18:00:01.165081    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 18:00:01.167381    6988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 18:00:01.167483    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:00:01.178390    6988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 18:00:01.178390    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:00:03.244075    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:00:03.244075    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:00:03.244075    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:03.244075    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:03.244356    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:00:03.244356    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:00:05.758057    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:00:05.758057    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:05.758057    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:00:05.784117    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:00:05.784117    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:05.784117    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:00:05.960484    6988 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 18:00:05.960638    6988 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7929841s)
	I0416 18:00:05.960638    6988 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0416 18:00:05.960638    6988 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.781976s)
	W0416 18:00:05.960638    6988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 18:00:05.975053    6988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 18:00:06.012668    6988 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0416 18:00:06.012756    6988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 18:00:06.012756    6988 start.go:494] detecting cgroup driver to use...
	I0416 18:00:06.012756    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:00:06.050850    6988 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0416 18:00:06.061001    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 18:00:06.091844    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 18:00:06.110783    6988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 18:00:06.118610    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 18:00:06.144577    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:00:06.171490    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 18:00:06.198550    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:00:06.226893    6988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 18:00:06.255518    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 18:00:06.285057    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 18:00:06.314136    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 18:00:06.344453    6988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 18:00:06.362440    6988 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 18:00:06.374326    6988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 18:00:06.400901    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:06.587114    6988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 18:00:06.621553    6988 start.go:494] detecting cgroup driver to use...
	I0416 18:00:06.630654    6988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 18:00:06.656160    6988 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0416 18:00:06.656235    6988 command_runner.go:130] > [Unit]
	I0416 18:00:06.656235    6988 command_runner.go:130] > Description=Docker Application Container Engine
	I0416 18:00:06.656235    6988 command_runner.go:130] > Documentation=https://docs.docker.com
	I0416 18:00:06.656235    6988 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0416 18:00:06.656235    6988 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0416 18:00:06.656235    6988 command_runner.go:130] > StartLimitBurst=3
	I0416 18:00:06.656235    6988 command_runner.go:130] > StartLimitIntervalSec=60
	I0416 18:00:06.656235    6988 command_runner.go:130] > [Service]
	I0416 18:00:06.656235    6988 command_runner.go:130] > Type=notify
	I0416 18:00:06.656235    6988 command_runner.go:130] > Restart=on-failure
	I0416 18:00:06.656235    6988 command_runner.go:130] > Environment=NO_PROXY=172.19.91.227
	I0416 18:00:06.656235    6988 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0416 18:00:06.656235    6988 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0416 18:00:06.656235    6988 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0416 18:00:06.656235    6988 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0416 18:00:06.656235    6988 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0416 18:00:06.656235    6988 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0416 18:00:06.656235    6988 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0416 18:00:06.656235    6988 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0416 18:00:06.656235    6988 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0416 18:00:06.656235    6988 command_runner.go:130] > ExecStart=
	I0416 18:00:06.656778    6988 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0416 18:00:06.656778    6988 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0416 18:00:06.656820    6988 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0416 18:00:06.656870    6988 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0416 18:00:06.656870    6988 command_runner.go:130] > LimitNOFILE=infinity
	I0416 18:00:06.656911    6988 command_runner.go:130] > LimitNPROC=infinity
	I0416 18:00:06.656911    6988 command_runner.go:130] > LimitCORE=infinity
	I0416 18:00:06.656911    6988 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0416 18:00:06.656911    6988 command_runner.go:130] > # Only systemd 226 and above support this directive.
	I0416 18:00:06.656911    6988 command_runner.go:130] > TasksMax=infinity
	I0416 18:00:06.656911    6988 command_runner.go:130] > TimeoutStartSec=0
	I0416 18:00:06.656911    6988 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0416 18:00:06.656911    6988 command_runner.go:130] > Delegate=yes
	I0416 18:00:06.656911    6988 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0416 18:00:06.656911    6988 command_runner.go:130] > KillMode=process
	I0416 18:00:06.656911    6988 command_runner.go:130] > [Install]
	I0416 18:00:06.656911    6988 command_runner.go:130] > WantedBy=multi-user.target
	I0416 18:00:06.666231    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:00:06.697894    6988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 18:00:06.737622    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:00:06.771467    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:00:06.804240    6988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 18:00:06.854175    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:00:06.875932    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:00:06.907847    6988 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0416 18:00:06.916941    6988 ssh_runner.go:195] Run: which cri-dockerd
	I0416 18:00:06.922573    6988 command_runner.go:130] > /usr/bin/cri-dockerd
	I0416 18:00:06.930663    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 18:00:06.948367    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 18:00:06.987048    6988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 18:00:07.191969    6988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 18:00:07.382844    6988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 18:00:07.382971    6988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 18:00:07.425295    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:07.611967    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 18:00:10.072387    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.460242s)
	I0416 18:00:10.082602    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 18:00:10.120067    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 18:00:10.155302    6988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 18:00:10.359234    6988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 18:00:10.554817    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:10.747932    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 18:00:10.786544    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 18:00:10.819302    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:10.999957    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 18:00:11.099015    6988 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 18:00:11.111636    6988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 18:00:11.122504    6988 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0416 18:00:11.122504    6988 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0416 18:00:11.122504    6988 command_runner.go:130] > Device: 0,22	Inode: 871         Links: 1
	I0416 18:00:11.122504    6988 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0416 18:00:11.122504    6988 command_runner.go:130] > Access: 2024-04-16 18:00:11.194886190 +0000
	I0416 18:00:11.122504    6988 command_runner.go:130] > Modify: 2024-04-16 18:00:11.194886190 +0000
	I0416 18:00:11.122504    6988 command_runner.go:130] > Change: 2024-04-16 18:00:11.200886564 +0000
	I0416 18:00:11.122504    6988 command_runner.go:130] >  Birth: -
	I0416 18:00:11.122504    6988 start.go:562] Will wait 60s for crictl version
	I0416 18:00:11.131362    6988 ssh_runner.go:195] Run: which crictl
	I0416 18:00:11.136657    6988 command_runner.go:130] > /usr/bin/crictl
	I0416 18:00:11.146046    6988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 18:00:11.199867    6988 command_runner.go:130] > Version:  0.1.0
	I0416 18:00:11.199867    6988 command_runner.go:130] > RuntimeName:  docker
	I0416 18:00:11.199867    6988 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0416 18:00:11.199867    6988 command_runner.go:130] > RuntimeApiVersion:  v1
	I0416 18:00:11.199867    6988 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 18:00:11.205859    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 18:00:11.237864    6988 command_runner.go:130] > 26.0.1
	I0416 18:00:11.245954    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 18:00:11.279233    6988 command_runner.go:130] > 26.0.1
	I0416 18:00:11.280642    6988 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 18:00:11.281457    6988 out.go:177]   - env NO_PROXY=172.19.91.227
	I0416 18:00:11.282089    6988 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 18:00:11.285919    6988 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 18:00:11.285919    6988 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 18:00:11.285919    6988 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 18:00:11.285919    6988 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 18:00:11.289016    6988 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 18:00:11.289092    6988 ip.go:210] interface addr: 172.19.80.1/20
	I0416 18:00:11.297335    6988 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 18:00:11.303557    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 18:00:11.324932    6988 mustload.go:65] Loading cluster: multinode-945500
	I0416 18:00:11.324932    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:00:11.326302    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:00:13.285643    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:00:13.285643    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:13.285643    6988 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:00:13.285961    6988 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500 for IP: 172.19.91.6
	I0416 18:00:13.285961    6988 certs.go:194] generating shared ca certs ...
	I0416 18:00:13.285961    6988 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:13.286821    6988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 18:00:13.287059    6988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 18:00:13.287230    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 18:00:13.287572    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 18:00:13.287754    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 18:00:13.287938    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 18:00:13.288586    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 18:00:13.288985    6988 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 18:00:13.289144    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 18:00:13.289487    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 18:00:13.289775    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 18:00:13.290139    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 18:00:13.290481    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 18:00:13.290481    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.291100    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.291100    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.291100    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 18:00:13.340860    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 18:00:13.392323    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 18:00:13.436417    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 18:00:13.477907    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 18:00:13.525089    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 18:00:13.566780    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 18:00:13.622111    6988 ssh_runner.go:195] Run: openssl version
	I0416 18:00:13.630969    6988 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0416 18:00:13.644134    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 18:00:13.673969    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.680217    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.680500    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.688237    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.696922    6988 command_runner.go:130] > 3ec20f2e
	I0416 18:00:13.708831    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 18:00:13.733581    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 18:00:13.760217    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.766741    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.767776    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.776508    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.784406    6988 command_runner.go:130] > b5213941
	I0416 18:00:13.793775    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 18:00:13.827353    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 18:00:13.855989    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.863594    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.863671    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.872713    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.881385    6988 command_runner.go:130] > 51391683
	I0416 18:00:13.891867    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
	I0416 18:00:13.919310    6988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 18:00:13.925213    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 18:00:13.925213    6988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 18:00:13.925406    6988 kubeadm.go:928] updating node {m02 172.19.91.6 8443 v1.29.3 docker false true} ...
	I0416 18:00:13.925406    6988 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-945500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.91.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 18:00:13.933333    6988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 18:00:13.949475    6988 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	I0416 18:00:13.949595    6988 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0416 18:00:13.961381    6988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0416 18:00:13.978194    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0416 18:00:13.978194    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0416 18:00:13.978194    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0416 18:00:13.978338    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0416 18:00:13.978338    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0416 18:00:13.989548    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:00:13.989548    6988 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0416 18:00:13.997857    6988 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0416 18:00:14.012312    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0416 18:00:14.012312    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0416 18:00:14.012312    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0416 18:00:14.012312    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0416 18:00:14.012312    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0416 18:00:14.012312    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0416 18:00:14.012312    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0416 18:00:14.024318    6988 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0416 18:00:14.111282    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0416 18:00:14.111282    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0416 18:00:14.111282    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0416 18:00:15.159706    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0416 18:00:15.176637    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0416 18:00:15.206211    6988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 18:00:15.245325    6988 ssh_runner.go:195] Run: grep 172.19.91.227	control-plane.minikube.internal$ /etc/hosts
	I0416 18:00:15.251624    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.91.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 18:00:15.280749    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:15.453073    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 18:00:15.479748    6988 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:00:15.480950    6988 start.go:316] joinCluster: &{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 18:00:15.481069    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0416 18:00:15.481184    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:00:17.505631    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:00:17.505631    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:17.506531    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:00:19.802120    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 18:00:19.802120    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:19.802309    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:00:19.993353    6988 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token gegaat.x425l3cmfd8uouwr --discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 18:00:19.993446    6988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0": (4.5121206s)
	I0416 18:00:19.993446    6988 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0416 18:00:19.993532    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gegaat.x425l3cmfd8uouwr --discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-945500-m02"
	I0416 18:00:20.187968    6988 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 18:00:21.976702    6988 command_runner.go:130] > [preflight] Running pre-flight checks
	I0416 18:00:21.976807    6988 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0416 18:00:21.976807    6988 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0416 18:00:21.976877    6988 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 18:00:21.976877    6988 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 18:00:21.976877    6988 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0416 18:00:21.976877    6988 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0416 18:00:21.976877    6988 command_runner.go:130] > This node has joined the cluster:
	I0416 18:00:21.976877    6988 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0416 18:00:21.976946    6988 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0416 18:00:21.976946    6988 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0416 18:00:21.977006    6988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gegaat.x425l3cmfd8uouwr --discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-945500-m02": (1.9833608s)
	I0416 18:00:21.977121    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0416 18:00:22.175327    6988 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0416 18:00:22.347211    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-945500-m02 minikube.k8s.io/updated_at=2024_04_16T18_00_22_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=multinode-945500 minikube.k8s.io/primary=false
	I0416 18:00:22.461008    6988 command_runner.go:130] > node/multinode-945500-m02 labeled
	I0416 18:00:22.461089    6988 start.go:318] duration metric: took 6.9798519s to joinCluster
	I0416 18:00:22.461089    6988 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0416 18:00:22.462104    6988 out.go:177] * Verifying Kubernetes components...
	I0416 18:00:22.462104    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:00:22.473344    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:22.642951    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 18:00:22.666251    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 18:00:22.666816    6988 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.91.227:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 18:00:22.667170    6988 node_ready.go:35] waiting up to 6m0s for node "multinode-945500-m02" to be "Ready" ...
	I0416 18:00:22.667170    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:22.667170    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:22.667170    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:22.667170    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:22.680255    6988 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0416 18:00:22.680255    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Audit-Id: 79e76c8e-11df-4387-9f30-9f5f1755a5e0
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:22.680255    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:22.680255    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Content-Length: 3925
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:22 GMT
	I0416 18:00:22.680255    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"594","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2901 chars]
	I0416 18:00:23.181369    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:23.181855    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:23.181855    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:23.181855    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:23.186449    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:23.186582    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:23.186582    6988 round_trippers.go:580]     Audit-Id: 4bae6118-587b-4d9b-a922-3970c34bf8ba
	I0416 18:00:23.186582    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:23.186582    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:23.186673    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:23.186717    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:23.186756    6988 round_trippers.go:580]     Content-Length: 3925
	I0416 18:00:23.186756    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:23 GMT
	I0416 18:00:23.186949    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"594","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2901 chars]
	I0416 18:00:23.677191    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:23.677191    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:23.677317    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:23.677317    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:23.680492    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:23.680492    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Audit-Id: a7f57610-9860-47cd-ab38-3f286c67dceb
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:23.680492    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:23.680492    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Content-Length: 3925
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:23 GMT
	I0416 18:00:23.681055    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"594","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2901 chars]
	I0416 18:00:24.175480    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:24.175572    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:24.175572    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:24.175572    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:24.179352    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:24.179352    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:24.179352    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Content-Length: 3925
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:24 GMT
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Audit-Id: aacf48fe-adbc-4413-b29d-2b958ba7f686
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:24.179352    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:24.179613    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"594","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2901 chars]
	I0416 18:00:24.673856    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:24.673925    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:24.673925    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:24.673925    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:24.676592    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:24.676592    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Audit-Id: 000742e0-7f5e-446d-8a61-8bd8bd82aedc
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:24.676592    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:24.676592    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:24 GMT
	I0416 18:00:24.677350    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:24.677739    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:25.170259    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:25.170259    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:25.170259    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:25.170259    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:25.173426    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:25.173426    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:25 GMT
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Audit-Id: f9c1a393-b288-45a4-98d3-52d7af11f587
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:25.173426    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:25.173426    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:25.173964    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:25.669435    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:25.669435    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:25.669435    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:25.669530    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:25.672183    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:25.672183    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:25.672183    6988 round_trippers.go:580]     Audit-Id: 56bf1cb1-d49e-4031-8ee9-9392bbe1f6c8
	I0416 18:00:25.672183    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:25.672183    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:25.672183    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:25.672183    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:25.672183    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:25.673192    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:25 GMT
	I0416 18:00:25.673265    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:26.181911    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:26.182121    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:26.182121    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:26.182121    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:26.186490    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:26.186490    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:26.186490    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:26.186490    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:26.186490    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:26.186490    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:26.186490    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:26 GMT
	I0416 18:00:26.186580    6988 round_trippers.go:580]     Audit-Id: 88264325-f44e-4d75-8f22-6b8c5c0e9719
	I0416 18:00:26.186580    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:26.186613    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:26.679044    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:26.679044    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:26.679044    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:26.679044    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:26.683356    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:26.683356    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:26.683356    6988 round_trippers.go:580]     Audit-Id: c54e17f7-7d89-4371-9a95-03073ffa0ffb
	I0416 18:00:26.683356    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:26.683356    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:26.683356    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:26.683356    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:26.683356    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:26.683527    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:26 GMT
	I0416 18:00:26.683689    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:26.683980    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:27.180698    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:27.180698    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:27.181090    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:27.181090    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:27.184901    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:27.184901    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:27.184901    6988 round_trippers.go:580]     Audit-Id: b36ab219-082e-454d-8277-5ffcef9ec16b
	I0416 18:00:27.184901    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:27.184901    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:27.184901    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:27.184901    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:27.185540    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:27.185540    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:27 GMT
	I0416 18:00:27.185671    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:27.678872    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:27.678872    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:27.678975    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:27.678975    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:27.682351    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:27.683004    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Audit-Id: f599c3f7-7c68-4f15-8953-bfd791eb0198
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:27.683054    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:27.683054    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:27 GMT
	I0416 18:00:27.683286    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:28.183860    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:28.183860    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:28.183860    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:28.183860    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:28.186319    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:28.186319    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Audit-Id: 872de824-f646-4d43-860c-2165005c98a0
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:28.186319    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:28.186319    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:28 GMT
	I0416 18:00:28.187336    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:28.670992    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:28.670992    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:28.670992    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:28.670992    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:28.675123    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:28.675123    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Audit-Id: 098493ef-9038-4b08-bf9e-667a6c61491f
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:28.675123    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:28.675123    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:28 GMT
	I0416 18:00:28.675123    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:29.174836    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:29.174890    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:29.174945    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:29.174945    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:29.179018    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:29.179018    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:29 GMT
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Audit-Id: c31ffe7d-9164-4329-85bd-7a52ce9c45ff
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:29.179018    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:29.179018    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:29.179018    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:29.179706    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:29.677336    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:29.677336    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:29.677336    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:29.677336    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:29.681001    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:29.681227    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:29.681286    6988 round_trippers.go:580]     Audit-Id: 389d232b-c9c8-4769-869a-1c7205097848
	I0416 18:00:29.681330    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:29.681330    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:29.681367    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:29.681367    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:29.681367    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:29.681367    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:29 GMT
	I0416 18:00:29.681367    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:30.179989    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:30.179989    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:30.179989    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:30.179989    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:30.184557    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:30.184557    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Audit-Id: 2d0a23fe-1858-420a-8f7d-89a4ab9e2074
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:30.184860    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:30.184860    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:30 GMT
	I0416 18:00:30.185147    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:30.678172    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:30.678172    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:30.678172    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:30.678172    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:30.681395    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:30.681395    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:30.681395    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:30.681395    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:30.681395    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:30.681395    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:30.681395    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:30.681395    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:30 GMT
	I0416 18:00:30.682030    6988 round_trippers.go:580]     Audit-Id: d89d2b5b-078b-40e7-a8de-db37ba442614
	I0416 18:00:30.682245    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:31.177211    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:31.177533    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:31.177533    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:31.177533    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:31.252985    6988 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0416 18:00:31.252985    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:31.252985    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:31.252985    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:31.252985    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:31.252985    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:31.252985    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:31 GMT
	I0416 18:00:31.252985    6988 round_trippers.go:580]     Audit-Id: 874c3508-0079-436c-9ee6-4bfd92a9fb2a
	I0416 18:00:31.253576    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:31.253576    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:31.682017    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:31.682017    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:31.682017    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:31.682017    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:31.684916    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:31.685729    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:31.685729    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:31.685729    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:31 GMT
	I0416 18:00:31.685729    6988 round_trippers.go:580]     Audit-Id: d159045d-d37c-4252-bd61-8c73f50b03f8
	I0416 18:00:31.685830    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:31.685830    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:31.685830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:31.685830    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:32.173658    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:32.173658    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:32.173658    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:32.173658    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:32.177586    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:32.177586    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:32.177586    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:32.177586    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:32 GMT
	I0416 18:00:32.177586    6988 round_trippers.go:580]     Audit-Id: d53ca0a9-698a-4e2e-92c6-bda133162c76
	I0416 18:00:32.177586    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:32.177586    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:32.177586    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:32.178475    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:32.678024    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:32.678024    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:32.678024    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:32.678024    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:32.682085    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:32.682614    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:32.682614    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:32.682614    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:32 GMT
	I0416 18:00:32.682614    6988 round_trippers.go:580]     Audit-Id: 165d0d28-6574-4108-94db-5907ad039dd6
	I0416 18:00:32.682614    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:32.682614    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:32.682684    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:32.682989    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:33.168664    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:33.168922    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:33.168922    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:33.168922    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:33.172390    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:33.172390    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:33.172390    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:33.172390    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:33.172390    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:33.172390    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:33.172390    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:33 GMT
	I0416 18:00:33.172390    6988 round_trippers.go:580]     Audit-Id: ba696923-3f1a-4e11-8165-651eef11660a
	I0416 18:00:33.173411    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:33.676259    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:33.676259    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:33.676259    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:33.676259    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:33.680629    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:33.680629    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:33.680629    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:33.681219    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:33.681219    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:33.681219    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:33 GMT
	I0416 18:00:33.681219    6988 round_trippers.go:580]     Audit-Id: 7be99938-6273-447f-8367-634cd5f0a4de
	I0416 18:00:33.681219    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:33.681531    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:33.682462    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:34.178701    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:34.178701    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:34.178701    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:34.178701    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:34.181286    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:34.181286    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:34.181286    6988 round_trippers.go:580]     Audit-Id: f6019dfe-ab29-48d8-9d01-ee729ec66029
	I0416 18:00:34.181286    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:34.181286    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:34.181286    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:34.181286    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:34.181286    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:34 GMT
	I0416 18:00:34.181975    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:34.669380    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:34.669668    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:34.669668    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:34.669668    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:34.672465    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:34.672465    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:34.672465    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:34.672465    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:34.672465    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:34.672465    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:34 GMT
	I0416 18:00:34.672465    6988 round_trippers.go:580]     Audit-Id: a8719766-b414-4604-94c0-e20be6a01464
	I0416 18:00:34.672465    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:34.673674    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:35.169393    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:35.169618    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:35.169692    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:35.169692    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:35.174028    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:35.174028    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:35.174209    6988 round_trippers.go:580]     Audit-Id: ea553a57-8167-487c-a417-8cf0ded53743
	I0416 18:00:35.174209    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:35.174209    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:35.174209    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:35.174209    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:35.174209    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:35 GMT
	I0416 18:00:35.174511    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:35.682247    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:35.682650    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:35.682650    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:35.682650    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:35.685938    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:35.685938    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:35.685938    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:35.685938    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:35.685938    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:35.685938    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:35.685938    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:35 GMT
	I0416 18:00:35.685938    6988 round_trippers.go:580]     Audit-Id: 82dc03b1-e6f8-433d-ac2b-277fc69a2b99
	I0416 18:00:35.686923    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:35.687544    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:36.182291    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:36.182393    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:36.182393    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:36.182442    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:36.190024    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 18:00:36.190024    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:36.190024    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:36.190024    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:36.190024    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:36.190024    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:36.190024    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:36 GMT
	I0416 18:00:36.190024    6988 round_trippers.go:580]     Audit-Id: a48a8529-ba4d-49a4-90a4-d4a77c7c5001
	I0416 18:00:36.190657    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:36.677065    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:36.677162    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:36.677162    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:36.677162    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:36.680646    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:36.680646    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:36.680646    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:36.680646    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:36.680646    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:36 GMT
	I0416 18:00:36.681185    6988 round_trippers.go:580]     Audit-Id: e4e94e54-d688-4263-a0ef-d154f5f4abeb
	I0416 18:00:36.681185    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:36.681185    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:36.681442    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:37.174195    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:37.174195    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:37.174634    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:37.174634    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:37.178029    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:37.178029    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:37.178830    6988 round_trippers.go:580]     Audit-Id: 55aa8476-6f9d-4256-9569-30e89b1a496b
	I0416 18:00:37.178830    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:37.178830    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:37.178830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:37.178830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:37.178830    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:37 GMT
	I0416 18:00:37.179087    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:37.673081    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:37.673348    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:37.673425    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:37.673425    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:37.677095    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:37.677095    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:37.677095    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:37.677095    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:37.677193    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:37.677193    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:37 GMT
	I0416 18:00:37.677193    6988 round_trippers.go:580]     Audit-Id: f84a1c1a-51f5-4ca5-aedb-2f21bb70141f
	I0416 18:00:37.677193    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:37.677583    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:38.171025    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:38.171133    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:38.171133    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:38.171133    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:38.174956    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:38.174956    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:38.174956    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:38.174956    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:38.175478    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:38 GMT
	I0416 18:00:38.175478    6988 round_trippers.go:580]     Audit-Id: ad79e752-a790-4167-88de-0fa0a1ce2c7f
	I0416 18:00:38.175478    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:38.175478    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:38.175685    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:38.176345    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
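The `node_ready.go:53` lines above come from a wait loop that re-fetches the node object every ~500 ms and checks whether its `Ready` condition has become `"True"`. As a minimal sketch of that predicate (not minikube's actual implementation — the struct and function names here are illustrative, decoding only the fields the check needs from the Node JSON):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// node mirrors just the fields of a Kubernetes Node object that the
// readiness check needs; the JSON tags follow the API's field names.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeReady reports whether a Node response body carries a Ready
// condition with status "True" -- the predicate the poll loop in the
// log keeps re-evaluating until it flips or the wait times out.
func nodeReady(body []byte) (bool, error) {
	var n node
	if err := json.Unmarshal(body, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	// No Ready condition reported yet (kubelet not posting status):
	// treat as not ready and let the caller poll again.
	return false, nil
}

func main() {
	notReady := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	ok, err := nodeReady(notReady)
	fmt.Println(ok, err)
}
```

Each `has status "Ready":"False"` line in the log corresponds to one evaluation of this predicate returning false, after which the loop sleeps and issues the next GET.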
	I0416 18:00:38.682781    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:38.682781    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:38.682781    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:38.682875    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:38.687443    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:38.687443    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:38.687443    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:38 GMT
	I0416 18:00:38.687443    6988 round_trippers.go:580]     Audit-Id: 9f833ee4-3fc1-4823-99f9-056bf39a2137
	I0416 18:00:38.687443    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:38.687443    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:38.687443    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:38.687443    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:38.687880    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:39.181718    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:39.181718    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:39.181718    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:39.181718    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:39.185234    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:39.185234    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:39.185234    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:39 GMT
	I0416 18:00:39.185234    6988 round_trippers.go:580]     Audit-Id: c944df6e-2f72-4b2f-84ed-0ef01d4bf4ad
	I0416 18:00:39.185234    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:39.185234    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:39.185234    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:39.185234    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:39.186227    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:39.679471    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:39.679471    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:39.679471    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:39.679471    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:39.683435    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:39.683435    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:39.683435    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:39.683435    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:39.683435    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:39.683435    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:39 GMT
	I0416 18:00:39.683435    6988 round_trippers.go:580]     Audit-Id: 72ce3907-afe5-4673-a364-1b0ade9a63a2
	I0416 18:00:39.683435    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:39.684439    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:40.179709    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:40.179709    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:40.179709    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:40.179709    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:40.182280    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:40.182280    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:40.182280    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:40.182280    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:40.182280    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:40.182280    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:40 GMT
	I0416 18:00:40.182280    6988 round_trippers.go:580]     Audit-Id: 15242798-963e-4292-8f78-c57c95f730a6
	I0416 18:00:40.182280    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:40.183037    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:40.183378    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:40.679352    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:40.679436    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:40.679436    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:40.679436    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:40.682752    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:40.682752    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:40.682752    6988 round_trippers.go:580]     Audit-Id: e11e0806-566d-477a-bcb8-8829648fc79a
	I0416 18:00:40.682752    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:40.682752    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:40.682752    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:40.682752    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:40.682752    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:40 GMT
	I0416 18:00:40.683363    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:41.181519    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:41.181623    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.181623    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.181623    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.184563    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.184563    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.184563    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.184563    6988 round_trippers.go:580]     Audit-Id: 8c5f2f81-67e0-45b9-81aa-b9f9cb72a322
	I0416 18:00:41.184563    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.185366    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.185366    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.185366    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.185630    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"630","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3268 chars]
	I0416 18:00:41.186155    6988 node_ready.go:49] node "multinode-945500-m02" has status "Ready":"True"
	I0416 18:00:41.186155    6988 node_ready.go:38] duration metric: took 18.5179332s for node "multinode-945500-m02" to be "Ready" ...
	I0416 18:00:41.186235    6988 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:00:41.186380    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 18:00:41.186380    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.186380    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.186461    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.190907    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:41.191511    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.191511    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.191511    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.191511    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.191511    6988 round_trippers.go:580]     Audit-Id: 5b40846d-502b-40b4-b4e6-b0c0c199dcda
	I0416 18:00:41.191511    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.191511    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.194735    6988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"630"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70406 chars]
	I0416 18:00:41.197721    6988 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.197721    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:00:41.197721    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.197721    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.197721    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.200304    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.201307    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.201307    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.201307    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.201307    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.201307    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.201307    6988 round_trippers.go:580]     Audit-Id: ddd585b2-d4a5-4fc9-9e78-3d162e0d75cf
	I0416 18:00:41.201307    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.201671    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0416 18:00:41.202254    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.202254    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.202254    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.202254    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.204830    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.204830    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.204830    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.204830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.204830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.204830    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.204830    6988 round_trippers.go:580]     Audit-Id: 5615a17f-6d55-4784-b914-b1262342e4ef
	I0416 18:00:41.204830    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.205530    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.206190    6988 pod_ready.go:92] pod "coredns-76f75df574-86z7h" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.206190    6988 pod_ready.go:81] duration metric: took 8.4686ms for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.206190    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.206190    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:00:41.206190    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.206190    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.206190    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.208799    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.208799    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.208799    6988 round_trippers.go:580]     Audit-Id: ae8a0c71-2dd6-45b7-96d9-80a7e15fec82
	I0416 18:00:41.208799    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.208799    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.208799    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.208799    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.208799    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.209788    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"245cef70-3506-471b-9bf6-dd14a2c23d8c","resourceVersion":"372","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.91.227:2379","kubernetes.io/config.hash":"c735a32dacf9631b2b4787fe99cff316","kubernetes.io/config.mirror":"c735a32dacf9631b2b4787fe99cff316","kubernetes.io/config.seen":"2024-04-16T17:57:28.101466445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0416 18:00:41.209825    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.209825    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.209825    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.209825    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.211989    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.211989    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.211989    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.211989    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.211989    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.211989    6988 round_trippers.go:580]     Audit-Id: 0c5d029c-085b-4f7e-a116-d1258a75da93
	I0416 18:00:41.211989    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.211989    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.213223    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.213811    6988 pod_ready.go:92] pod "etcd-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.213811    6988 pod_ready.go:81] duration metric: took 7.62ms for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.213811    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.213811    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-945500
	I0416 18:00:41.213811    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.213811    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.213811    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.216448    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.216448    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.216448    6988 round_trippers.go:580]     Audit-Id: 6b2d211f-a673-4f75-931c-2de9b00a2806
	I0416 18:00:41.216448    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.216448    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.216448    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.216448    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.216448    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.217191    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-945500","namespace":"kube-system","uid":"c6ae969a-de5d-4c7e-af09-b1a5eb21f2ab","resourceVersion":"314","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.91.227:8443","kubernetes.io/config.hash":"564fae5a3e5851c815d6092b123a5395","kubernetes.io/config.mirror":"564fae5a3e5851c815d6092b123a5395","kubernetes.io/config.seen":"2024-04-16T17:57:28.101471746Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0416 18:00:41.217191    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.217778    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.217778    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.217778    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.219971    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.219971    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.219971    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.219971    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.219971    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.219971    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.219971    6988 round_trippers.go:580]     Audit-Id: 97c48e0c-3227-4fdb-bb53-2c5b0a99e16e
	I0416 18:00:41.219971    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.220674    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.220674    6988 pod_ready.go:92] pod "kube-apiserver-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.220674    6988 pod_ready.go:81] duration metric: took 6.8627ms for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.220674    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.220674    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-945500
	I0416 18:00:41.221243    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.221243    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.221243    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.223295    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.223295    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.224145    6988 round_trippers.go:580]     Audit-Id: 5ff785c8-f305-4111-b54a-6d01717ce756
	I0416 18:00:41.224182    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.224223    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.224223    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.224223    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.224315    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.224478    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-945500","namespace":"kube-system","uid":"01b937c2-9827-4240-83f0-3536fec5eb5e","resourceVersion":"345","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.mirror":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.seen":"2024-04-16T17:57:28.101473146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0416 18:00:41.225131    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.225131    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.225131    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.225131    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.231431    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 18:00:41.231431    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.231431    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.231431    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.231431    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.231431    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.231431    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.231431    6988 round_trippers.go:580]     Audit-Id: d45b4d6a-ea94-4484-87ef-fd18b35ed725
	I0416 18:00:41.231431    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.232071    6988 pod_ready.go:92] pod "kube-controller-manager-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.232071    6988 pod_ready.go:81] duration metric: took 11.3966ms for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.232071    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q5bdr" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.382236    6988 request.go:629] Waited for 150.1565ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q5bdr
	I0416 18:00:41.382407    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q5bdr
	I0416 18:00:41.382407    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.382407    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.382407    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.385083    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.385083    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.385083    6988 round_trippers.go:580]     Audit-Id: b4d8ec79-02a6-45ad-9ecc-b7b22761dffb
	I0416 18:00:41.385083    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.385083    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.385083    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.385083    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.385083    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.385507    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q5bdr","generateName":"kube-proxy-","namespace":"kube-system","uid":"18f90e3f-dd52-44a3-918a-66181a779f58","resourceVersion":"614","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5826 chars]
	I0416 18:00:41.585818    6988 request.go:629] Waited for 199.7761ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:41.585818    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:41.585818    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.586164    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.586164    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.590196    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:41.590196    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.590196    6988 round_trippers.go:580]     Audit-Id: 1d479fce-49d7-483b-a6cd-e9bad5ef24c8
	I0416 18:00:41.590196    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.590196    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.590196    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.590196    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.590196    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.590196    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"630","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3268 chars]
	I0416 18:00:41.590835    6988 pod_ready.go:92] pod "kube-proxy-q5bdr" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.590835    6988 pod_ready.go:81] duration metric: took 358.7431ms for pod "kube-proxy-q5bdr" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.590835    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.787070    6988 request.go:629] Waited for 196.0845ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 18:00:41.787761    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 18:00:41.787761    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.787761    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.787761    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.791225    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:41.791225    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.791225    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.791225    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.791225    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.791225    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.791225    6988 round_trippers.go:580]     Audit-Id: 0948013e-ea2e-4863-bd44-98088c0ba200
	I0416 18:00:41.791225    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.792789    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rfxsg","generateName":"kube-proxy-","namespace":"kube-system","uid":"b740e6e0-4768-4dd4-a958-307662a92578","resourceVersion":"401","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0416 18:00:41.990002    6988 request.go:629] Waited for 196.614ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.990240    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.990240    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.990240    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.990240    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.993828    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:41.993828    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.993828    6988 round_trippers.go:580]     Audit-Id: 604aaeac-f05a-47b3-96f5-af81155d3173
	I0416 18:00:41.993828    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.993828    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.993828    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.993828    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.993828    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:42 GMT
	I0416 18:00:41.994260    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.994754    6988 pod_ready.go:92] pod "kube-proxy-rfxsg" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.994817    6988 pod_ready.go:81] duration metric: took 403.9592ms for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.994817    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:42.191736    6988 request.go:629] Waited for 196.6039ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 18:00:42.191828    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 18:00:42.191933    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:42.191933    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:42.191933    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:42.194567    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:42.194567    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:42.194567    6988 round_trippers.go:580]     Audit-Id: 6ab76f79-405f-48f9-ad04-90e78aa34737
	I0416 18:00:42.194567    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:42.194567    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:42.194567    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:42.194567    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:42.195203    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:42 GMT
	I0416 18:00:42.195382    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-945500","namespace":"kube-system","uid":"a09e52e8-1ac2-4c22-8a3d-57969fae85a9","resourceVersion":"310","creationTimestamp":"2024-04-16T17:57:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.mirror":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.seen":"2024-04-16T17:57:20.694761708Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0416 18:00:42.393042    6988 request.go:629] Waited for 196.8309ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:42.393350    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:42.393350    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:42.393434    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:42.393434    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:42.396719    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:42.397078    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:42.397078    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:42.397078    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:42.397078    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:42 GMT
	I0416 18:00:42.397078    6988 round_trippers.go:580]     Audit-Id: ff7a49f1-7963-4872-babf-4857b06f6961
	I0416 18:00:42.397078    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:42.397078    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:42.397705    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:42.397705    6988 pod_ready.go:92] pod "kube-scheduler-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:42.397705    6988 pod_ready.go:81] duration metric: took 402.8649ms for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:42.397705    6988 pod_ready.go:38] duration metric: took 1.2114007s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:00:42.398226    6988 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 18:00:42.407057    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:00:42.430019    6988 system_svc.go:56] duration metric: took 31.7913ms WaitForService to wait for kubelet
	I0416 18:00:42.430019    6988 kubeadm.go:576] duration metric: took 19.9677952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 18:00:42.430019    6988 node_conditions.go:102] verifying NodePressure condition ...
	I0416 18:00:42.594801    6988 request.go:629] Waited for 164.4742ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes
	I0416 18:00:42.595048    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes
	I0416 18:00:42.595048    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:42.595156    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:42.595156    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:42.600192    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 18:00:42.600192    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:42.600192    6988 round_trippers.go:580]     Audit-Id: 7201947e-da4a-45b2-acc1-266f83b267ad
	I0416 18:00:42.600296    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:42.600296    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:42.600296    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:42.600296    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:42.600296    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:42 GMT
	I0416 18:00:42.600799    6988 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"633"},"items":[{"metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 9279 chars]
	I0416 18:00:42.601645    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:00:42.601726    6988 node_conditions.go:123] node cpu capacity is 2
	I0416 18:00:42.601726    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:00:42.601726    6988 node_conditions.go:123] node cpu capacity is 2
	I0416 18:00:42.601726    6988 node_conditions.go:105] duration metric: took 171.6974ms to run NodePressure ...
	I0416 18:00:42.601799    6988 start.go:240] waiting for startup goroutines ...
	I0416 18:00:42.601887    6988 start.go:254] writing updated cluster config ...
	I0416 18:00:42.611423    6988 ssh_runner.go:195] Run: rm -f paused
	I0416 18:00:42.727143    6988 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 18:00:42.728491    6988 out.go:177] * Done! kubectl is now configured to use "multinode-945500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 16 17:57:51 multinode-945500 dockerd[1329]: time="2024-04-16T17:57:51.274090773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:01:05 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:05.483494643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:01:05 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:05.483635748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:01:05 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:05.483656849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:01:05 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:05.485502118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:01:05 multinode-945500 cri-dockerd[1229]: time="2024-04-16T18:01:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c72a50cfb5bdeb4ceb5279eb60fe15681ce2bc5a0b4d7bd7d08ad490736a87c7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 16 18:01:06 multinode-945500 cri-dockerd[1229]: time="2024-04-16T18:01:06Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 16 18:01:06 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:06.790007462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:01:06 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:06.790158272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:01:06 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:06.790278279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:01:06 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:06.790482592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:05:43 multinode-945500 dockerd[1323]: 2024/04/16 18:05:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:05:43 multinode-945500 dockerd[1323]: 2024/04/16 18:05:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:05:43 multinode-945500 dockerd[1323]: 2024/04/16 18:05:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:05:44 multinode-945500 dockerd[1323]: 2024/04/16 18:05:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:05:44 multinode-945500 dockerd[1323]: 2024/04/16 18:05:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:05:44 multinode-945500 dockerd[1323]: 2024/04/16 18:05:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:05:44 multinode-945500 dockerd[1323]: 2024/04/16 18:05:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1475366123af9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago       Running             busybox                   0                   c72a50cfb5bde       busybox-7fdf7869d9-jxvx2
	6ad0b1d75a1e3       cbb01a7bd410d                                                                                         9 minutes ago       Running             coredns                   0                   2ba60ece6840a       coredns-76f75df574-86z7h
	2b470472d009f       6e38f40d628db                                                                                         9 minutes ago       Running             storage-provisioner       0                   6f233a9704eee       storage-provisioner
	cd37920f1d544       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              9 minutes ago       Running             kindnet-cni               0                   d2cd68d7f406d       kindnet-tp7jl
	f56880607ce1e       a1d263b5dc5b0                                                                                         9 minutes ago       Running             kube-proxy                0                   68766d2b671ff       kube-proxy-rfxsg
	736259e5d03b5       39f995c9f1996                                                                                         9 minutes ago       Running             kube-apiserver            0                   b8699d93388d0       kube-apiserver-multinode-945500
	4a7c8d9808b66       8c390d98f50c0                                                                                         9 minutes ago       Running             kube-scheduler            0                   ecb0ceb1a3fed       kube-scheduler-multinode-945500
	91288754cb0bd       6052a25da3f97                                                                                         9 minutes ago       Running             kube-controller-manager   0                   d28c611e06055       kube-controller-manager-multinode-945500
	0cae708a3787a       3861cfcd7c04c                                                                                         9 minutes ago       Running             etcd                      0                   5f7e5b16341d1       etcd-multinode-945500
	
	
	==> coredns [6ad0b1d75a1e] <==
	[INFO] 10.244.0.3:47642 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140809s
	[INFO] 10.244.1.2:38063 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000393824s
	[INFO] 10.244.1.2:53430 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000153309s
	[INFO] 10.244.1.2:47690 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181411s
	[INFO] 10.244.1.2:40309 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145609s
	[INFO] 10.244.1.2:60258 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000052603s
	[INFO] 10.244.1.2:43597 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000068204s
	[INFO] 10.244.1.2:53767 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061503s
	[INFO] 10.244.1.2:54777 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056603s
	[INFO] 10.244.0.3:38964 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184311s
	[INFO] 10.244.0.3:53114 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074805s
	[INFO] 10.244.0.3:36074 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062204s
	[INFO] 10.244.0.3:60668 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090906s
	[INFO] 10.244.1.2:54659 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099206s
	[INFO] 10.244.1.2:41929 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080505s
	[INFO] 10.244.1.2:40931 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059704s
	[INFO] 10.244.1.2:48577 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058804s
	[INFO] 10.244.0.3:33415 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000283317s
	[INFO] 10.244.0.3:52256 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109407s
	[INFO] 10.244.0.3:34542 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000222014s
	[INFO] 10.244.0.3:59509 - 5 "PTR IN 1.80.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000278017s
	[INFO] 10.244.1.2:34647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164509s
	[INFO] 10.244.1.2:44123 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155309s
	[INFO] 10.244.1.2:47985 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000056403s
	[INFO] 10.244.1.2:38781 - 5 "PTR IN 1.80.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000051303s
	
	
	==> describe nodes <==
	Name:               multinode-945500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-945500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=multinode-945500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T17_57_28_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:57:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-945500
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 18:06:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 18:06:39 +0000   Tue, 16 Apr 2024 17:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 18:06:39 +0000   Tue, 16 Apr 2024 17:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 18:06:39 +0000   Tue, 16 Apr 2024 17:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 18:06:39 +0000   Tue, 16 Apr 2024 17:57:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.91.227
	  Hostname:    multinode-945500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 e85d34dd6c5848b4a3ec498b43e70cda
	  System UUID:                f07a2411-3a9a-ca4a-afc3-5ddc78eea33d
	  Boot ID:                    271a6251-2183-4573-9d3f-923b343cbbd7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-jxvx2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 coredns-76f75df574-86z7h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m15s
	  kube-system                 etcd-multinode-945500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m27s
	  kube-system                 kindnet-tp7jl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m15s
	  kube-system                 kube-apiserver-multinode-945500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                 kube-controller-manager-multinode-945500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                 kube-proxy-rfxsg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-scheduler-multinode-945500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m28s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m13s                  kube-proxy       
	  Normal  Starting                 9m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m35s (x8 over 9m35s)  kubelet          Node multinode-945500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m35s (x8 over 9m35s)  kubelet          Node multinode-945500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m35s (x7 over 9m35s)  kubelet          Node multinode-945500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m27s                  kubelet          Node multinode-945500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s                  kubelet          Node multinode-945500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s                  kubelet          Node multinode-945500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m16s                  node-controller  Node multinode-945500 event: Registered Node multinode-945500 in Controller
	  Normal  NodeReady                9m5s                   kubelet          Node multinode-945500 status is now: NodeReady
	
	
	Name:               multinode-945500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-945500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=multinode-945500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T18_00_22_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 18:00:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-945500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 18:06:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 18:06:28 +0000   Tue, 16 Apr 2024 18:00:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 18:06:28 +0000   Tue, 16 Apr 2024 18:00:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 18:06:28 +0000   Tue, 16 Apr 2024 18:00:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 18:06:28 +0000   Tue, 16 Apr 2024 18:00:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.91.6
	  Hostname:    multinode-945500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ffb3ffe1886460d8f31c8166436085f
	  System UUID:                cd85b681-7c9d-6842-b820-50fe53a2fe10
	  Boot ID:                    391147f8-cd3e-46f1-9b23-dd3a04f0f3a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ns8nx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kindnet-7pg6g               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m34s
	  kube-system                 kube-proxy-q5bdr            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m34s (x2 over 6m34s)  kubelet          Node multinode-945500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s (x2 over 6m34s)  kubelet          Node multinode-945500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s (x2 over 6m34s)  kubelet          Node multinode-945500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m31s                  node-controller  Node multinode-945500-m02 event: Registered Node multinode-945500-m02 in Controller
	  Normal  NodeReady                6m14s                  kubelet          Node multinode-945500-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr16 17:56] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.180108] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[ +28.712788] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.080808] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.453937] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.161653] systemd-fstab-generator[996]: Ignoring "noauto" option for root device
	[  +0.200737] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[  +2.669121] systemd-fstab-generator[1182]: Ignoring "noauto" option for root device
	[  +0.171244] systemd-fstab-generator[1194]: Ignoring "noauto" option for root device
	[  +0.164230] systemd-fstab-generator[1206]: Ignoring "noauto" option for root device
	[  +0.237653] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[Apr16 17:57] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.100359] kauditd_printk_skb: 205 callbacks suppressed
	[  +2.927133] systemd-fstab-generator[1513]: Ignoring "noauto" option for root device
	[  +5.699753] systemd-fstab-generator[1707]: Ignoring "noauto" option for root device
	[  +0.085837] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.760431] systemd-fstab-generator[2107]: Ignoring "noauto" option for root device
	[  +0.135160] kauditd_printk_skb: 62 callbacks suppressed
	[  +7.450297] hrtimer: interrupt took 987259 ns
	[  +5.262610] systemd-fstab-generator[2292]: Ignoring "noauto" option for root device
	[  +0.195654] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.560394] kauditd_printk_skb: 51 callbacks suppressed
	[Apr16 18:01] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [0cae708a3787] <==
	{"level":"info","ts":"2024-04-16T17:57:22.024751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 switched to configuration voters=(16790251013889734582)"}
	{"level":"info","ts":"2024-04-16T17:57:22.037022Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ba3fb579e58fbd76","local-member-id":"e902f456ac8a37b6","added-peer-id":"e902f456ac8a37b6","added-peer-peer-urls":["https://172.19.91.227:2380"]}
	{"level":"info","ts":"2024-04-16T17:57:22.036585Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T17:57:22.037467Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e902f456ac8a37b6","initial-advertise-peer-urls":["https://172.19.91.227:2380"],"listen-peer-urls":["https://172.19.91.227:2380"],"advertise-client-urls":["https://172.19.91.227:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.91.227:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T17:57:22.037573Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T17:57:22.036608Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.91.227:2380"}
	{"level":"info","ts":"2024-04-16T17:57:22.037796Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.91.227:2380"}
	{"level":"info","ts":"2024-04-16T17:57:22.485441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-16T17:57:22.485773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-16T17:57:22.486062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 received MsgPreVoteResp from e902f456ac8a37b6 at term 1"}
	{"level":"info","ts":"2024-04-16T17:57:22.486206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 became candidate at term 2"}
	{"level":"info","ts":"2024-04-16T17:57:22.486317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 received MsgVoteResp from e902f456ac8a37b6 at term 2"}
	{"level":"info","ts":"2024-04-16T17:57:22.486459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 became leader at term 2"}
	{"level":"info","ts":"2024-04-16T17:57:22.486613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e902f456ac8a37b6 elected leader e902f456ac8a37b6 at term 2"}
	{"level":"info","ts":"2024-04-16T17:57:22.492605Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e902f456ac8a37b6","local-member-attributes":"{Name:multinode-945500 ClientURLs:[https://172.19.91.227:2379]}","request-path":"/0/members/e902f456ac8a37b6/attributes","cluster-id":"ba3fb579e58fbd76","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T17:57:22.493027Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:57:22.493291Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T17:57:22.495438Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T17:57:22.493174Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:57:22.501637Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T17:57:22.494099Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:57:22.508993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.91.227:2379"}
	{"level":"info","ts":"2024-04-16T17:57:22.537458Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ba3fb579e58fbd76","local-member-id":"e902f456ac8a37b6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:57:22.537767Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:57:22.540447Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 18:06:55 up 11 min,  0 users,  load average: 0.07, 0.20, 0.16
	Linux multinode-945500 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [cd37920f1d54] <==
	I0416 18:05:48.668238       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:05:58.679207       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:05:58.679247       1 main.go:227] handling current node
	I0416 18:05:58.679258       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:05:58.679265       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:06:08.692446       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:06:08.692477       1 main.go:227] handling current node
	I0416 18:06:08.692488       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:06:08.692494       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:06:18.698964       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:06:18.699070       1 main.go:227] handling current node
	I0416 18:06:18.699085       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:06:18.699093       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:06:28.712752       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:06:28.712839       1 main.go:227] handling current node
	I0416 18:06:28.712852       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:06:28.712859       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:06:38.719236       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:06:38.719272       1 main.go:227] handling current node
	I0416 18:06:38.719282       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:06:38.719288       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:06:48.725875       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:06:48.725965       1 main.go:227] handling current node
	I0416 18:06:48.725977       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:06:48.725984       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [736259e5d03b] <==
	I0416 17:57:24.492548       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 17:57:24.493015       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 17:57:24.493164       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 17:57:24.493567       1 aggregator.go:165] initial CRD sync complete...
	I0416 17:57:24.493754       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 17:57:24.493855       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 17:57:24.493948       1 cache.go:39] Caches are synced for autoregister controller
	I0416 17:57:24.498835       1 controller.go:624] quota admission added evaluator for: namespaces
	I0416 17:57:24.572544       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0416 17:57:24.581941       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 17:57:25.383934       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0416 17:57:25.391363       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0416 17:57:25.391584       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 17:57:26.186472       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 17:57:26.241100       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 17:57:26.380286       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0416 17:57:26.389156       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.91.227]
	I0416 17:57:26.390446       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 17:57:26.395894       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 17:57:26.463024       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 17:57:27.978875       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 17:57:27.996061       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0416 17:57:28.010130       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 17:57:40.322187       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0416 17:57:40.406944       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [91288754cb0b] <==
	I0416 17:57:41.176487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="38.505µs"
	I0416 17:57:50.419156       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="87.708µs"
	I0416 17:57:50.439046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="77.007µs"
	I0416 17:57:52.289724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="340.797µs"
	I0416 17:57:52.327958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="8.879815ms"
	I0416 17:57:52.329283       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="77.899µs"
	I0416 17:57:54.522679       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0416 18:00:21.143291       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-945500-m02\" does not exist"
	I0416 18:00:21.160886       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7pg6g"
	I0416 18:00:21.165863       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q5bdr"
	I0416 18:00:21.190337       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-945500-m02" podCIDRs=["10.244.1.0/24"]
	I0416 18:00:24.552622       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-945500-m02"
	I0416 18:00:24.552697       1 event.go:376] "Event occurred" object="multinode-945500-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-945500-m02 event: Registered Node multinode-945500-m02 in Controller"
	I0416 18:00:41.273225       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-945500-m02"
	I0416 18:01:05.000162       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 2"
	I0416 18:01:05.018037       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-ns8nx"
	I0416 18:01:05.041877       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-jxvx2"
	I0416 18:01:05.061957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="58.524499ms"
	I0416 18:01:05.079880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="17.398354ms"
	I0416 18:01:05.080339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="67.502µs"
	I0416 18:01:05.093042       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="55.802µs"
	I0416 18:01:07.013162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="7.557663ms"
	I0416 18:01:07.014558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="1.14747ms"
	I0416 18:01:07.433900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="7.930386ms"
	I0416 18:01:07.434257       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="37.403µs"
	
	
	==> kube-proxy [f56880607ce1] <==
	I0416 17:57:41.776688       1 server_others.go:72] "Using iptables proxy"
	I0416 17:57:41.792626       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.91.227"]
	I0416 17:57:41.867257       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 17:57:41.867331       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:57:41.867350       1 server_others.go:168] "Using iptables Proxier"
	I0416 17:57:41.871330       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:57:41.872230       1 server.go:865] "Version info" version="v1.29.3"
	I0416 17:57:41.872370       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:57:41.874113       1 config.go:188] "Starting service config controller"
	I0416 17:57:41.874135       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 17:57:41.874160       1 config.go:97] "Starting endpoint slice config controller"
	I0416 17:57:41.874165       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 17:57:41.876871       1 config.go:315] "Starting node config controller"
	I0416 17:57:41.876896       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 17:57:41.974693       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 17:57:41.974749       1 shared_informer.go:318] Caches are synced for service config
	I0416 17:57:41.977426       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4a7c8d9808b6] <==
	W0416 17:57:25.449324       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.449598       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.655533       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.656479       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.692827       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 17:57:25.693097       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 17:57:25.711042       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 17:57:25.711136       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 17:57:25.720155       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 17:57:25.720353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 17:57:25.721550       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.721738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.738855       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 17:57:25.738995       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 17:57:25.765058       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 17:57:25.765096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0416 17:57:25.774340       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.774569       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.791990       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 17:57:25.792031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 17:57:25.929298       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 17:57:25.929342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 17:57:26.119349       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 17:57:26.119818       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0416 17:57:29.235915       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 18:02:28 multinode-945500 kubelet[2114]: E0416 18:02:28.261580    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:02:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:02:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:02:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:02:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:03:28 multinode-945500 kubelet[2114]: E0416 18:03:28.265624    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:03:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:03:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:03:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:03:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:04:28 multinode-945500 kubelet[2114]: E0416 18:04:28.262267    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:04:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:04:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:04:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:04:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:05:28 multinode-945500 kubelet[2114]: E0416 18:05:28.265449    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:05:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:05:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:05:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:05:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:06:28 multinode-945500 kubelet[2114]: E0416 18:06:28.261760    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:06:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:06:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:06:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:06:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 18:06:48.250805    2952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-945500 -n multinode-945500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-945500 -n multinode-945500: (10.9566886s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-945500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/CopyFile (62.79s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (259.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-945500 node start m03 -v=7 --alsologtostderr: exit status 90 (2m43.2358246s)

                                                
                                                
-- stdout --
	* Starting "multinode-945500-m03" worker node in "multinode-945500" cluster
	* Restarting existing hyperv VM for "multinode-945500-m03" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 18:08:21.200032    1980 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 18:08:21.260672    1980 out.go:291] Setting OutFile to fd 880 ...
	I0416 18:08:21.277799    1980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:08:21.277872    1980 out.go:304] Setting ErrFile to fd 984...
	I0416 18:08:21.277872    1980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:08:21.291439    1980 mustload.go:65] Loading cluster: multinode-945500
	I0416 18:08:21.292005    1980 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:08:21.292585    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:08:23.212132    1980 main.go:141] libmachine: [stdout =====>] : Off
	
	I0416 18:08:23.212346    1980 main.go:141] libmachine: [stderr =====>] : 
	W0416 18:08:23.212446    1980 host.go:58] "multinode-945500-m03" host status: Stopped
	I0416 18:08:23.214032    1980 out.go:177] * Starting "multinode-945500-m03" worker node in "multinode-945500" cluster
	I0416 18:08:23.214623    1980 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 18:08:23.214849    1980 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 18:08:23.214919    1980 cache.go:56] Caching tarball of preloaded images
	I0416 18:08:23.215349    1980 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 18:08:23.215432    1980 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 18:08:23.215698    1980 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 18:08:23.217642    1980 start.go:360] acquireMachinesLock for multinode-945500-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 18:08:23.217642    1980 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-945500-m03"
	I0416 18:08:23.217642    1980 start.go:96] Skipping create...Using existing machine configuration
	I0416 18:08:23.217642    1980 fix.go:54] fixHost starting: m03
	I0416 18:08:23.218265    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:08:25.185453    1980 main.go:141] libmachine: [stdout =====>] : Off
	
	I0416 18:08:25.185453    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:25.186222    1980 fix.go:112] recreateIfNeeded on multinode-945500-m03: state=Stopped err=<nil>
	W0416 18:08:25.186222    1980 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 18:08:25.186944    1980 out.go:177] * Restarting existing hyperv VM for "multinode-945500-m03" ...
	I0416 18:08:25.187078    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-945500-m03
	I0416 18:08:27.830059    1980 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:08:27.830059    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:27.830059    1980 main.go:141] libmachine: Waiting for host to start...
	I0416 18:08:27.830059    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:08:29.878815    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:08:29.878815    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:29.879808    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:08:32.170233    1980 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:08:32.170675    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:33.177916    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:08:35.227134    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:08:35.227134    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:35.227134    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:08:37.483114    1980 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:08:37.483114    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:38.496890    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:08:40.475821    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:08:40.475821    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:40.475821    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:08:42.758448    1980 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:08:42.758448    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:43.767643    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:08:45.755322    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:08:45.755322    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:45.755578    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:08:48.081436    1980 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:08:48.081808    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:49.094012    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:08:51.104054    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:08:51.104622    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:51.104622    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:08:53.471885    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:08:53.471885    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:53.474921    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:08:55.417944    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:08:55.418763    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:55.418763    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:08:57.743429    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:08:57.743429    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:57.743429    1980 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 18:08:57.747790    1980 machine.go:94] provisionDockerMachine start ...
	I0416 18:08:57.748284    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:08:59.672689    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:08:59.672689    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:59.672839    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:09:01.990567    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:09:01.990567    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:01.997025    1980 main.go:141] libmachine: Using SSH client type: native
	I0416 18:09:01.997665    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.139 22 <nil> <nil>}
	I0416 18:09:01.997665    1980 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 18:09:02.141312    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 18:09:02.141312    1980 buildroot.go:166] provisioning hostname "multinode-945500-m03"
	I0416 18:09:02.141910    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:09:04.030915    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:09:04.031402    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:04.031477    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:09:06.388788    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:09:06.388788    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:06.393355    1980 main.go:141] libmachine: Using SSH client type: native
	I0416 18:09:06.393767    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.139 22 <nil> <nil>}
	I0416 18:09:06.393767    1980 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-945500-m03 && echo "multinode-945500-m03" | sudo tee /etc/hostname
	I0416 18:09:06.559389    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-945500-m03
	
	I0416 18:09:06.559928    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:09:08.526172    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:09:08.526367    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:08.526440    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:09:10.910644    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:09:10.910715    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:10.914941    1980 main.go:141] libmachine: Using SSH client type: native
	I0416 18:09:10.914941    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.139 22 <nil> <nil>}
	I0416 18:09:10.914941    1980 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-945500-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-945500-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-945500-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 18:09:11.072110    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 18:09:11.072110    1980 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 18:09:11.072110    1980 buildroot.go:174] setting up certificates
	I0416 18:09:11.072110    1980 provision.go:84] configureAuth start
	I0416 18:09:11.072110    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:09:13.051573    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:09:13.052475    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:13.052475    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:09:15.349814    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:09:15.349814    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:15.350865    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:09:17.314391    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:09:17.314391    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:17.314465    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:09:19.588678    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:09:19.588678    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:19.588678    1980 provision.go:143] copyHostCerts
	I0416 18:09:19.589310    1980 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 18:09:19.589676    1980 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 18:09:19.589676    1980 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 18:09:19.590137    1980 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 18:09:19.590592    1980 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 18:09:19.591148    1980 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 18:09:19.591148    1980 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 18:09:19.591388    1980 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 18:09:19.592175    1980 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 18:09:19.592397    1980 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 18:09:19.592469    1980 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 18:09:19.592724    1980 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 18:09:19.593606    1980 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-945500-m03 san=[127.0.0.1 172.19.85.139 localhost minikube multinode-945500-m03]
	I0416 18:09:19.845929    1980 provision.go:177] copyRemoteCerts
	I0416 18:09:19.853925    1980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 18:09:19.853925    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:09:21.805292    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:09:21.805292    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:21.805292    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:09:24.116279    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:09:24.116279    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:24.116360    1980 sshutil.go:53] new ssh client: &{IP:172.19.85.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\id_rsa Username:docker}
	I0416 18:09:24.226154    1980 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3719812s)
	I0416 18:09:24.226154    1980 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 18:09:24.226893    1980 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0416 18:09:24.269712    1980 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 18:09:24.269972    1980 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 18:09:24.311281    1980 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 18:09:24.311281    1980 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 18:09:24.354376    1980 provision.go:87] duration metric: took 13.281513s to configureAuth
	I0416 18:09:24.354376    1980 buildroot.go:189] setting minikube options for container-runtime
	I0416 18:09:24.355269    1980 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:09:24.355269    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:09:26.289197    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:09:26.289197    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:26.289527    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:09:28.583688    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:09:28.583688    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:28.589362    1980 main.go:141] libmachine: Using SSH client type: native
	I0416 18:09:28.589513    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.139 22 <nil> <nil>}
	I0416 18:09:28.589513    1980 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 18:09:28.723367    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 18:09:28.723446    1980 buildroot.go:70] root file system type: tmpfs
	I0416 18:09:28.723592    1980 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 18:09:28.723730    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:09:30.655087    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:09:30.656057    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:30.656153    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:09:32.953144    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:09:32.953144    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:32.957644    1980 main.go:141] libmachine: Using SSH client type: native
	I0416 18:09:32.958242    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.139 22 <nil> <nil>}
	I0416 18:09:32.958242    1980 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 18:09:33.143894    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 18:09:33.143894    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:09:35.076353    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:09:35.076353    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:35.076429    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:09:37.396132    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:09:37.396132    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:37.401275    1980 main.go:141] libmachine: Using SSH client type: native
	I0416 18:09:37.401443    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.139 22 <nil> <nil>}
	I0416 18:09:37.401443    1980 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 18:09:39.375761    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 18:09:39.375821    1980 machine.go:97] duration metric: took 41.6251677s to provisionDockerMachine
	I0416 18:09:39.375860    1980 start.go:293] postStartSetup for "multinode-945500-m03" (driver="hyperv")
	I0416 18:09:39.375860    1980 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 18:09:39.384599    1980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 18:09:39.384599    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:09:41.282881    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:09:41.282881    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:41.283117    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:09:43.596609    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:09:43.596609    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:43.597496    1980 sshutil.go:53] new ssh client: &{IP:172.19.85.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\id_rsa Username:docker}
	I0416 18:09:43.704682    1980 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3198374s)
	I0416 18:09:43.717278    1980 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 18:09:43.724527    1980 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 18:09:43.724527    1980 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 18:09:43.725366    1980 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 18:09:43.726047    1980 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 18:09:43.726113    1980 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 18:09:43.735127    1980 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 18:09:43.752113    1980 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 18:09:43.793405    1980 start.go:296] duration metric: took 4.4172952s for postStartSetup
	I0416 18:09:43.793405    1980 fix.go:56] duration metric: took 1m20.5711944s for fixHost
	I0416 18:09:43.793405    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:09:45.728174    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:09:45.728247    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:45.728269    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:09:48.080398    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:09:48.080398    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:48.083919    1980 main.go:141] libmachine: Using SSH client type: native
	I0416 18:09:48.084508    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.139 22 <nil> <nil>}
	I0416 18:09:48.084508    1980 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 18:09:48.216241    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713290988.388576117
	
	I0416 18:09:48.216241    1980 fix.go:216] guest clock: 1713290988.388576117
	I0416 18:09:48.216761    1980 fix.go:229] Guest: 2024-04-16 18:09:48.388576117 +0000 UTC Remote: 2024-04-16 18:09:43.7934058 +0000 UTC m=+82.674270401 (delta=4.595170317s)
	I0416 18:09:48.216761    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:09:50.131857    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:09:50.132896    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:50.132989    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:09:52.457955    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:09:52.457955    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:52.462347    1980 main.go:141] libmachine: Using SSH client type: native
	I0416 18:09:52.462347    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.139 22 <nil> <nil>}
	I0416 18:09:52.462347    1980 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713290988
	I0416 18:09:52.614307    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 18:09:48 UTC 2024
	
	I0416 18:09:52.614307    1980 fix.go:236] clock set: Tue Apr 16 18:09:48 UTC 2024
	 (err=<nil>)
	I0416 18:09:52.614307    1980 start.go:83] releasing machines lock for "multinode-945500-m03", held for 1m29.3915964s
	I0416 18:09:52.614545    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:09:54.596003    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:09:54.596003    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:54.596506    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:09:56.876127    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:09:56.876127    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:56.878749    1980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 18:09:56.879368    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:09:56.889809    1980 ssh_runner.go:195] Run: systemctl --version
	I0416 18:09:56.889809    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:09:58.855356    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:09:58.855356    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:58.855356    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:09:58.855787    1980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:09:58.855787    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:09:58.856380    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:10:01.233174    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:10:01.233174    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:10:01.233174    1980 sshutil.go:53] new ssh client: &{IP:172.19.85.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\id_rsa Username:docker}
	I0416 18:10:01.257976    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:10:01.257976    1980 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:10:01.257976    1980 sshutil.go:53] new ssh client: &{IP:172.19.85.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\id_rsa Username:docker}
	I0416 18:10:01.411663    1980 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5321339s)
	I0416 18:10:01.411663    1980 ssh_runner.go:235] Completed: systemctl --version: (4.5215973s)
	I0416 18:10:01.421808    1980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 18:10:01.430456    1980 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 18:10:01.440844    1980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 18:10:01.469266    1980 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 18:10:01.469266    1980 start.go:494] detecting cgroup driver to use...
	I0416 18:10:01.469266    1980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:10:01.512612    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 18:10:01.541998    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 18:10:01.561417    1980 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 18:10:01.569953    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 18:10:01.599794    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:10:01.629537    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 18:10:01.667219    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:10:01.697555    1980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 18:10:01.726755    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 18:10:01.756928    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 18:10:01.787589    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 18:10:01.816595    1980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 18:10:01.845601    1980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 18:10:01.873858    1980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:10:02.064377    1980 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 18:10:02.098365    1980 start.go:494] detecting cgroup driver to use...
	I0416 18:10:02.109429    1980 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 18:10:02.145335    1980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:10:02.178542    1980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 18:10:02.219019    1980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:10:02.252745    1980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:10:02.287708    1980 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 18:10:02.347622    1980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:10:02.372030    1980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:10:02.416562    1980 ssh_runner.go:195] Run: which cri-dockerd
	I0416 18:10:02.432701    1980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 18:10:02.451957    1980 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 18:10:02.499759    1980 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 18:10:02.689385    1980 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 18:10:02.871997    1980 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 18:10:02.872193    1980 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 18:10:02.917213    1980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:10:03.123856    1980 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 18:11:04.260310    1980 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1329211s)
	I0416 18:11:04.269487    1980 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 18:11:04.299900    1980 out.go:177] 
	W0416 18:11:04.300897    1980 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 18:09:38 multinode-945500-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:09:38 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:38.125979206Z" level=info msg="Starting up"
	Apr 16 18:09:38 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:38.127492304Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 18:09:38 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:38.128827678Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=655
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.165717401Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.194194024Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.194907917Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.195203556Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.195285467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196243192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196276296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196454219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196553932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196573035Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196584136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196964786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.197581267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.200663570Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.200760682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.200901401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.200992713Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.201519282Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.201622195Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.201637897Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203658161Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203822583Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203846386Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203865388Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203881090Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203951099Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204538576Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204688996Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204804011Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204827814Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204844016Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204858318Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204872120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204888422Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204904124Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204917626Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204931728Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204944529Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204971533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204987835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205003437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205021039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205034041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205047543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205059544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205181460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205199463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205214265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205230367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205242368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205254570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205282674Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205303676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205316178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205328179Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205371985Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205389888Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205402289Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205413191Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205468898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205682426Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205702128Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.206192893Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.206329110Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.206371216Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.206402020Z" level=info msg="containerd successfully booted in 0.045005s"
	Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.174862304Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.194379632Z" level=info msg="Loading containers: start."
	Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.405597882Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.477348860Z" level=info msg="Loading containers: done."
	Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.496174541Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.496965741Z" level=info msg="Daemon has completed initialization"
	Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.545898932Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 18:09:39 multinode-945500-m03 systemd[1]: Started Docker Application Container Engine.
	Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.547974794Z" level=info msg="API listen on [::]:2376"
	Apr 16 18:10:03 multinode-945500-m03 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.322970590Z" level=info msg="Processing signal 'terminated'"
	Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.324448537Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.324734446Z" level=info msg="Daemon shutdown complete"
	Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.324802648Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.324893151Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 18:10:04 multinode-945500-m03 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 18:10:04 multinode-945500-m03 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 18:10:04 multinode-945500-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:10:04 multinode-945500-m03 dockerd[1027]: time="2024-04-16T18:10:04.401537010Z" level=info msg="Starting up"
	Apr 16 18:11:04 multinode-945500-m03 dockerd[1027]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 18:11:04 multinode-945500-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 18:11:04 multinode-945500-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 18:11:04 multinode-945500-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 18:09:38 multinode-945500-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:09:38 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:38.125979206Z" level=info msg="Starting up"
	Apr 16 18:09:38 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:38.127492304Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 18:09:38 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:38.128827678Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=655
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.165717401Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.194194024Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.194907917Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.195203556Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.195285467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196243192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196276296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196454219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196553932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196573035Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196584136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196964786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.197581267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.200663570Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.200760682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.200901401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.200992713Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.201519282Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.201622195Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.201637897Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203658161Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203822583Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203846386Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203865388Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203881090Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203951099Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204538576Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204688996Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204804011Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204827814Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204844016Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204858318Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204872120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204888422Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204904124Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204917626Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204931728Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204944529Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204971533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204987835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205003437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205021039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205034041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205047543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205059544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205181460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205199463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205214265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205230367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205242368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205254570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205282674Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205303676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205316178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205328179Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205371985Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205389888Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205402289Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205413191Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205468898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205682426Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205702128Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.206192893Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.206329110Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.206371216Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.206402020Z" level=info msg="containerd successfully booted in 0.045005s"
	Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.174862304Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.194379632Z" level=info msg="Loading containers: start."
	Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.405597882Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.477348860Z" level=info msg="Loading containers: done."
	Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.496174541Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.496965741Z" level=info msg="Daemon has completed initialization"
	Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.545898932Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 18:09:39 multinode-945500-m03 systemd[1]: Started Docker Application Container Engine.
	Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.547974794Z" level=info msg="API listen on [::]:2376"
	Apr 16 18:10:03 multinode-945500-m03 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.322970590Z" level=info msg="Processing signal 'terminated'"
	Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.324448537Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.324734446Z" level=info msg="Daemon shutdown complete"
	Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.324802648Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.324893151Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 18:10:04 multinode-945500-m03 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 18:10:04 multinode-945500-m03 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 18:10:04 multinode-945500-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:10:04 multinode-945500-m03 dockerd[1027]: time="2024-04-16T18:10:04.401537010Z" level=info msg="Starting up"
	Apr 16 18:11:04 multinode-945500-m03 dockerd[1027]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 18:11:04 multinode-945500-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 18:11:04 multinode-945500-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 18:11:04 multinode-945500-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0416 18:11:04.300897    1980 out.go:239] * 
	W0416 18:11:04.312642    1980 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_node_d3371be9b91d3e65188b37d2edd0282838d23ad8_1.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 18:11:04.313713    1980 out.go:177] 

** /stderr **
multinode_test.go:284: W0416 18:08:21.200032    1980 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0416 18:08:21.260672    1980 out.go:291] Setting OutFile to fd 880 ...
I0416 18:08:21.277799    1980 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 18:08:21.277872    1980 out.go:304] Setting ErrFile to fd 984...
I0416 18:08:21.277872    1980 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 18:08:21.291439    1980 mustload.go:65] Loading cluster: multinode-945500
I0416 18:08:21.292005    1980 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0416 18:08:21.292585    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:08:23.212132    1980 main.go:141] libmachine: [stdout =====>] : Off

I0416 18:08:23.212346    1980 main.go:141] libmachine: [stderr =====>] : 
W0416 18:08:23.212446    1980 host.go:58] "multinode-945500-m03" host status: Stopped
I0416 18:08:23.214032    1980 out.go:177] * Starting "multinode-945500-m03" worker node in "multinode-945500" cluster
I0416 18:08:23.214623    1980 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0416 18:08:23.214849    1980 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
I0416 18:08:23.214919    1980 cache.go:56] Caching tarball of preloaded images
I0416 18:08:23.215349    1980 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0416 18:08:23.215432    1980 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0416 18:08:23.215698    1980 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
I0416 18:08:23.217642    1980 start.go:360] acquireMachinesLock for multinode-945500-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0416 18:08:23.217642    1980 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-945500-m03"
I0416 18:08:23.217642    1980 start.go:96] Skipping create...Using existing machine configuration
I0416 18:08:23.217642    1980 fix.go:54] fixHost starting: m03
I0416 18:08:23.218265    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:08:25.185453    1980 main.go:141] libmachine: [stdout =====>] : Off

I0416 18:08:25.185453    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:08:25.186222    1980 fix.go:112] recreateIfNeeded on multinode-945500-m03: state=Stopped err=<nil>
W0416 18:08:25.186222    1980 fix.go:138] unexpected machine state, will restart: <nil>
I0416 18:08:25.186944    1980 out.go:177] * Restarting existing hyperv VM for "multinode-945500-m03" ...
I0416 18:08:25.187078    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-945500-m03
I0416 18:08:27.830059    1980 main.go:141] libmachine: [stdout =====>] : 
I0416 18:08:27.830059    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:08:27.830059    1980 main.go:141] libmachine: Waiting for host to start...
I0416 18:08:27.830059    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:08:29.878815    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:08:29.878815    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:08:29.879808    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:08:32.170233    1980 main.go:141] libmachine: [stdout =====>] : 
I0416 18:08:32.170675    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:08:33.177916    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:08:35.227134    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:08:35.227134    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:08:35.227134    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:08:37.483114    1980 main.go:141] libmachine: [stdout =====>] : 
I0416 18:08:37.483114    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:08:38.496890    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:08:40.475821    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:08:40.475821    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:08:40.475821    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:08:42.758448    1980 main.go:141] libmachine: [stdout =====>] : 
I0416 18:08:42.758448    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:08:43.767643    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:08:45.755322    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:08:45.755322    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:08:45.755578    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:08:48.081436    1980 main.go:141] libmachine: [stdout =====>] : 
I0416 18:08:48.081808    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:08:49.094012    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:08:51.104054    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:08:51.104622    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:08:51.104622    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:08:53.471885    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139

I0416 18:08:53.471885    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:08:53.474921    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:08:55.417944    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:08:55.418763    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:08:55.418763    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:08:57.743429    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139

I0416 18:08:57.743429    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:08:57.743429    1980 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
I0416 18:08:57.747790    1980 machine.go:94] provisionDockerMachine start ...
I0416 18:08:57.748284    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:08:59.672689    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:08:59.672689    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:08:59.672839    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:09:01.990567    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139

I0416 18:09:01.990567    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:01.997025    1980 main.go:141] libmachine: Using SSH client type: native
I0416 18:09:01.997665    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.139 22 <nil> <nil>}
I0416 18:09:01.997665    1980 main.go:141] libmachine: About to run SSH command:
hostname
I0416 18:09:02.141312    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0416 18:09:02.141312    1980 buildroot.go:166] provisioning hostname "multinode-945500-m03"
I0416 18:09:02.141910    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:09:04.030915    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:09:04.031402    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:04.031477    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:09:06.388788    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139

I0416 18:09:06.388788    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:06.393355    1980 main.go:141] libmachine: Using SSH client type: native
I0416 18:09:06.393767    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.139 22 <nil> <nil>}
I0416 18:09:06.393767    1980 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-945500-m03 && echo "multinode-945500-m03" | sudo tee /etc/hostname
I0416 18:09:06.559389    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-945500-m03

I0416 18:09:06.559928    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:09:08.526172    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:09:08.526367    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:08.526440    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:09:10.910644    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139

I0416 18:09:10.910715    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:10.914941    1980 main.go:141] libmachine: Using SSH client type: native
I0416 18:09:10.914941    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.139 22 <nil> <nil>}
I0416 18:09:10.914941    1980 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\smultinode-945500-m03' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-945500-m03/g' /etc/hosts;
			else 
				echo '127.0.1.1 multinode-945500-m03' | sudo tee -a /etc/hosts; 
			fi
		fi
I0416 18:09:11.072110    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0416 18:09:11.072110    1980 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
I0416 18:09:11.072110    1980 buildroot.go:174] setting up certificates
I0416 18:09:11.072110    1980 provision.go:84] configureAuth start
I0416 18:09:11.072110    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:09:13.051573    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:09:13.052475    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:13.052475    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:09:15.349814    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139

I0416 18:09:15.349814    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:15.350865    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:09:17.314391    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:09:17.314391    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:17.314465    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:09:19.588678    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139

I0416 18:09:19.588678    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:19.588678    1980 provision.go:143] copyHostCerts
I0416 18:09:19.589310    1980 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
I0416 18:09:19.589676    1980 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
I0416 18:09:19.589676    1980 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
I0416 18:09:19.590137    1980 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
I0416 18:09:19.590592    1980 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
I0416 18:09:19.591148    1980 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
I0416 18:09:19.591148    1980 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
I0416 18:09:19.591388    1980 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
I0416 18:09:19.592175    1980 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
I0416 18:09:19.592397    1980 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
I0416 18:09:19.592469    1980 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
I0416 18:09:19.592724    1980 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
I0416 18:09:19.593606    1980 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-945500-m03 san=[127.0.0.1 172.19.85.139 localhost minikube multinode-945500-m03]
I0416 18:09:19.845929    1980 provision.go:177] copyRemoteCerts
I0416 18:09:19.853925    1980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0416 18:09:19.853925    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:09:21.805292    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:09:21.805292    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:21.805292    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:09:24.116279    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139

I0416 18:09:24.116279    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:24.116360    1980 sshutil.go:53] new ssh client: &{IP:172.19.85.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\id_rsa Username:docker}
I0416 18:09:24.226154    1980 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3719812s)
I0416 18:09:24.226154    1980 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I0416 18:09:24.226893    1980 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
I0416 18:09:24.269712    1980 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I0416 18:09:24.269972    1980 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0416 18:09:24.311281    1980 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I0416 18:09:24.311281    1980 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0416 18:09:24.354376    1980 provision.go:87] duration metric: took 13.281513s to configureAuth
I0416 18:09:24.354376    1980 buildroot.go:189] setting minikube options for container-runtime
I0416 18:09:24.355269    1980 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0416 18:09:24.355269    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:09:26.289197    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:09:26.289197    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:26.289527    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:09:28.583688    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139

I0416 18:09:28.583688    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:28.589362    1980 main.go:141] libmachine: Using SSH client type: native
I0416 18:09:28.589513    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.139 22 <nil> <nil>}
I0416 18:09:28.589513    1980 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0416 18:09:28.723367    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0416 18:09:28.723446    1980 buildroot.go:70] root file system type: tmpfs
I0416 18:09:28.723592    1980 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0416 18:09:28.723730    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:09:30.655087    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:09:30.656057    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:30.656153    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:09:32.953144    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139

I0416 18:09:32.953144    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:32.957644    1980 main.go:141] libmachine: Using SSH client type: native
I0416 18:09:32.958242    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.139 22 <nil> <nil>}
I0416 18:09:32.958242    1980 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0416 18:09:33.143894    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0416 18:09:33.143894    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:09:35.076353    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:09:35.076353    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:35.076429    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:09:37.396132    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139

I0416 18:09:37.396132    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:37.401275    1980 main.go:141] libmachine: Using SSH client type: native
I0416 18:09:37.401443    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.139 22 <nil> <nil>}
I0416 18:09:37.401443    1980 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0416 18:09:39.375761    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

I0416 18:09:39.375821    1980 machine.go:97] duration metric: took 41.6251677s to provisionDockerMachine
I0416 18:09:39.375860    1980 start.go:293] postStartSetup for "multinode-945500-m03" (driver="hyperv")
I0416 18:09:39.375860    1980 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0416 18:09:39.384599    1980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0416 18:09:39.384599    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:09:41.282881    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:09:41.282881    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:41.283117    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:09:43.596609    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139

I0416 18:09:43.596609    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:43.597496    1980 sshutil.go:53] new ssh client: &{IP:172.19.85.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\id_rsa Username:docker}
I0416 18:09:43.704682    1980 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3198374s)
I0416 18:09:43.717278    1980 ssh_runner.go:195] Run: cat /etc/os-release
I0416 18:09:43.724527    1980 info.go:137] Remote host: Buildroot 2023.02.9
I0416 18:09:43.724527    1980 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
I0416 18:09:43.725366    1980 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
I0416 18:09:43.726047    1980 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
I0416 18:09:43.726113    1980 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
I0416 18:09:43.735127    1980 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0416 18:09:43.752113    1980 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
I0416 18:09:43.793405    1980 start.go:296] duration metric: took 4.4172952s for postStartSetup
I0416 18:09:43.793405    1980 fix.go:56] duration metric: took 1m20.5711944s for fixHost
I0416 18:09:43.793405    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:09:45.728174    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:09:45.728247    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:45.728269    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:09:48.080398    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139

I0416 18:09:48.080398    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:48.083919    1980 main.go:141] libmachine: Using SSH client type: native
I0416 18:09:48.084508    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.139 22 <nil> <nil>}
I0416 18:09:48.084508    1980 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0416 18:09:48.216241    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713290988.388576117

I0416 18:09:48.216241    1980 fix.go:216] guest clock: 1713290988.388576117
I0416 18:09:48.216761    1980 fix.go:229] Guest: 2024-04-16 18:09:48.388576117 +0000 UTC Remote: 2024-04-16 18:09:43.7934058 +0000 UTC m=+82.674270401 (delta=4.595170317s)
I0416 18:09:48.216761    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:09:50.131857    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:09:50.132896    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:50.132989    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:09:52.457955    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139

I0416 18:09:52.457955    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:52.462347    1980 main.go:141] libmachine: Using SSH client type: native
I0416 18:09:52.462347    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.139 22 <nil> <nil>}
I0416 18:09:52.462347    1980 main.go:141] libmachine: About to run SSH command:
sudo date -s @1713290988
I0416 18:09:52.614307    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 18:09:48 UTC 2024

I0416 18:09:52.614307    1980 fix.go:236] clock set: Tue Apr 16 18:09:48 UTC 2024
(err=<nil>)
I0416 18:09:52.614307    1980 start.go:83] releasing machines lock for "multinode-945500-m03", held for 1m29.3915964s
I0416 18:09:52.614545    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:09:54.596003    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:09:54.596003    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:54.596506    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:09:56.876127    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139

I0416 18:09:56.876127    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:56.878749    1980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0416 18:09:56.879368    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:09:56.889809    1980 ssh_runner.go:195] Run: systemctl --version
I0416 18:09:56.889809    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
I0416 18:09:58.855356    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:09:58.855356    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:58.855356    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:09:58.855787    1980 main.go:141] libmachine: [stdout =====>] : Running

I0416 18:09:58.855787    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:09:58.856380    1980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
I0416 18:10:01.233174    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139

I0416 18:10:01.233174    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:10:01.233174    1980 sshutil.go:53] new ssh client: &{IP:172.19.85.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\id_rsa Username:docker}
I0416 18:10:01.257976    1980 main.go:141] libmachine: [stdout =====>] : 172.19.85.139

I0416 18:10:01.257976    1980 main.go:141] libmachine: [stderr =====>] : 
I0416 18:10:01.257976    1980 sshutil.go:53] new ssh client: &{IP:172.19.85.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\id_rsa Username:docker}
I0416 18:10:01.411663    1980 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5321339s)
I0416 18:10:01.411663    1980 ssh_runner.go:235] Completed: systemctl --version: (4.5215973s)
I0416 18:10:01.421808    1980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0416 18:10:01.430456    1980 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0416 18:10:01.440844    1980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0416 18:10:01.469266    1980 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0416 18:10:01.469266    1980 start.go:494] detecting cgroup driver to use...
I0416 18:10:01.469266    1980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0416 18:10:01.512612    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0416 18:10:01.541998    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0416 18:10:01.561417    1980 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0416 18:10:01.569953    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0416 18:10:01.599794    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0416 18:10:01.629537    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0416 18:10:01.667219    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0416 18:10:01.697555    1980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0416 18:10:01.726755    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0416 18:10:01.756928    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0416 18:10:01.787589    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0416 18:10:01.816595    1980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0416 18:10:01.845601    1980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0416 18:10:01.873858    1980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0416 18:10:02.064377    1980 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0416 18:10:02.098365    1980 start.go:494] detecting cgroup driver to use...
I0416 18:10:02.109429    1980 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0416 18:10:02.145335    1980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0416 18:10:02.178542    1980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0416 18:10:02.219019    1980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0416 18:10:02.252745    1980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0416 18:10:02.287708    1980 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0416 18:10:02.347622    1980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0416 18:10:02.372030    1980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0416 18:10:02.416562    1980 ssh_runner.go:195] Run: which cri-dockerd
I0416 18:10:02.432701    1980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0416 18:10:02.451957    1980 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0416 18:10:02.499759    1980 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0416 18:10:02.689385    1980 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0416 18:10:02.871997    1980 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0416 18:10:02.872193    1980 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0416 18:10:02.917213    1980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0416 18:10:03.123856    1980 ssh_runner.go:195] Run: sudo systemctl restart docker
I0416 18:11:04.260310    1980 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1329211s)
I0416 18:11:04.269487    1980 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0416 18:11:04.299900    1980 out.go:177] 
W0416 18:11:04.300897    1980 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
Apr 16 18:09:38 multinode-945500-m03 systemd[1]: Starting Docker Application Container Engine...
Apr 16 18:09:38 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:38.125979206Z" level=info msg="Starting up"
Apr 16 18:09:38 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:38.127492304Z" level=info msg="containerd not running, starting managed containerd"
Apr 16 18:09:38 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:38.128827678Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=655
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.165717401Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.194194024Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.194907917Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.195203556Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.195285467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196243192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196276296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196454219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196553932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196573035Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196584136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196964786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.197581267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.200663570Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.200760682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.200901401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.200992713Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.201519282Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.201622195Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.201637897Z" level=info msg="metadata content store policy set" policy=shared
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203658161Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203822583Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203846386Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203865388Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203881090Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203951099Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204538576Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204688996Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204804011Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204827814Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204844016Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204858318Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204872120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204888422Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204904124Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204917626Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204931728Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204944529Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204971533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204987835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205003437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205021039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205034041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205047543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205059544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205181460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205199463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205214265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205230367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205242368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205254570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205282674Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205303676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205316178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205328179Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205371985Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205389888Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205402289Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205413191Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205468898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205682426Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205702128Z" level=info msg="NRI interface is disabled by configuration."
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.206192893Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.206329110Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.206371216Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.206402020Z" level=info msg="containerd successfully booted in 0.045005s"
Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.174862304Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.194379632Z" level=info msg="Loading containers: start."
Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.405597882Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.477348860Z" level=info msg="Loading containers: done."
Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.496174541Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.496965741Z" level=info msg="Daemon has completed initialization"
Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.545898932Z" level=info msg="API listen on /var/run/docker.sock"
Apr 16 18:09:39 multinode-945500-m03 systemd[1]: Started Docker Application Container Engine.
Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.547974794Z" level=info msg="API listen on [::]:2376"
Apr 16 18:10:03 multinode-945500-m03 systemd[1]: Stopping Docker Application Container Engine...
Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.322970590Z" level=info msg="Processing signal 'terminated'"
Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.324448537Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.324734446Z" level=info msg="Daemon shutdown complete"
Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.324802648Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.324893151Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Apr 16 18:10:04 multinode-945500-m03 systemd[1]: docker.service: Deactivated successfully.
Apr 16 18:10:04 multinode-945500-m03 systemd[1]: Stopped Docker Application Container Engine.
Apr 16 18:10:04 multinode-945500-m03 systemd[1]: Starting Docker Application Container Engine...
Apr 16 18:10:04 multinode-945500-m03 dockerd[1027]: time="2024-04-16T18:10:04.401537010Z" level=info msg="Starting up"
Apr 16 18:11:04 multinode-945500-m03 dockerd[1027]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 16 18:11:04 multinode-945500-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 18:11:04 multinode-945500-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 16 18:11:04 multinode-945500-m03 systemd[1]: Failed to start Docker Application Container Engine.

-- /stdout --
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
Apr 16 18:09:38 multinode-945500-m03 systemd[1]: Starting Docker Application Container Engine...
Apr 16 18:09:38 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:38.125979206Z" level=info msg="Starting up"
Apr 16 18:09:38 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:38.127492304Z" level=info msg="containerd not running, starting managed containerd"
Apr 16 18:09:38 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:38.128827678Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=655
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.165717401Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.194194024Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.194907917Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.195203556Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.195285467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196243192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196276296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196454219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196553932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196573035Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196584136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.196964786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.197581267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.200663570Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.200760682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.200901401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.200992713Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.201519282Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.201622195Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.201637897Z" level=info msg="metadata content store policy set" policy=shared
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203658161Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203822583Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203846386Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203865388Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203881090Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.203951099Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204538576Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204688996Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204804011Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204827814Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204844016Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204858318Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204872120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204888422Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204904124Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204917626Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204931728Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204944529Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204971533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.204987835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205003437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205021039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205034041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205047543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205059544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205181460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205199463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205214265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205230367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205242368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205254570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205282674Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205303676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205316178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205328179Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205371985Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205389888Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205402289Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205413191Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205468898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205682426Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.205702128Z" level=info msg="NRI interface is disabled by configuration."
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.206192893Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.206329110Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.206371216Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 16 18:09:38 multinode-945500-m03 dockerd[655]: time="2024-04-16T18:09:38.206402020Z" level=info msg="containerd successfully booted in 0.045005s"
Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.174862304Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.194379632Z" level=info msg="Loading containers: start."
Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.405597882Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.477348860Z" level=info msg="Loading containers: done."
Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.496174541Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.496965741Z" level=info msg="Daemon has completed initialization"
Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.545898932Z" level=info msg="API listen on /var/run/docker.sock"
Apr 16 18:09:39 multinode-945500-m03 systemd[1]: Started Docker Application Container Engine.
Apr 16 18:09:39 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:09:39.547974794Z" level=info msg="API listen on [::]:2376"
Apr 16 18:10:03 multinode-945500-m03 systemd[1]: Stopping Docker Application Container Engine...
Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.322970590Z" level=info msg="Processing signal 'terminated'"
Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.324448537Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.324734446Z" level=info msg="Daemon shutdown complete"
Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.324802648Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 16 18:10:03 multinode-945500-m03 dockerd[649]: time="2024-04-16T18:10:03.324893151Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Apr 16 18:10:04 multinode-945500-m03 systemd[1]: docker.service: Deactivated successfully.
Apr 16 18:10:04 multinode-945500-m03 systemd[1]: Stopped Docker Application Container Engine.
Apr 16 18:10:04 multinode-945500-m03 systemd[1]: Starting Docker Application Container Engine...
Apr 16 18:10:04 multinode-945500-m03 dockerd[1027]: time="2024-04-16T18:10:04.401537010Z" level=info msg="Starting up"
Apr 16 18:11:04 multinode-945500-m03 dockerd[1027]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 16 18:11:04 multinode-945500-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 18:11:04 multinode-945500-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 16 18:11:04 multinode-945500-m03 systemd[1]: Failed to start Docker Application Container Engine.

-- /stdout --
W0416 18:11:04.300897    1980 out.go:239] * 
W0416 18:11:04.312642    1980 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                      │
│    * If the above advice does not help, please let us know:                                                          │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
│                                                                                                                      │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
│    * Please also attach the following file to the GitHub issue:                                                      │
│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_node_d3371be9b91d3e65188b37d2edd0282838d23ad8_1.log    │
│                                                                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0416 18:11:04.313713    1980 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-windows-amd64.exe -p multinode-945500 node start m03 -v=7 --alsologtostderr": exit status 90
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 status -v=7 --alsologtostderr
E0416 18:11:07.043768    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-945500 status -v=7 --alsologtostderr: exit status 2 (32.1760406s)

-- stdout --
	multinode-945500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-945500-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-945500-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0416 18:11:04.716480   12916 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 18:11:04.777471   12916 out.go:291] Setting OutFile to fd 644 ...
	I0416 18:11:04.778478   12916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:11:04.778478   12916 out.go:304] Setting ErrFile to fd 940...
	I0416 18:11:04.778478   12916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:11:04.791483   12916 out.go:298] Setting JSON to false
	I0416 18:11:04.791483   12916 mustload.go:65] Loading cluster: multinode-945500
	I0416 18:11:04.791483   12916 notify.go:220] Checking for updates...
	I0416 18:11:04.791483   12916 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:11:04.791483   12916 status.go:255] checking status of multinode-945500 ...
	I0416 18:11:04.792484   12916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:11:06.826787   12916 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:11:06.826787   12916 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:06.827288   12916 status.go:330] multinode-945500 host status = "Running" (err=<nil>)
	I0416 18:11:06.827288   12916 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:11:06.828161   12916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:11:08.836164   12916 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:11:08.836618   12916 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:08.836618   12916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:11:11.214685   12916 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 18:11:11.214685   12916 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:11.214897   12916 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:11:11.224025   12916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 18:11:11.224025   12916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:11:13.212230   12916 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:11:13.212230   12916 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:13.212695   12916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:11:15.501140   12916 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 18:11:15.501140   12916 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:15.501610   12916 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:11:15.597033   12916 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3727614s)
	I0416 18:11:15.606087   12916 ssh_runner.go:195] Run: systemctl --version
	I0416 18:11:15.632554   12916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:11:15.661750   12916 kubeconfig.go:125] found "multinode-945500" server: "https://172.19.91.227:8443"
	I0416 18:11:15.661812   12916 api_server.go:166] Checking apiserver status ...
	I0416 18:11:15.670383   12916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 18:11:15.702113   12916 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2058/cgroup
	W0416 18:11:15.718790   12916 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2058/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 18:11:15.727625   12916 ssh_runner.go:195] Run: ls
	I0416 18:11:15.734202   12916 api_server.go:253] Checking apiserver healthz at https://172.19.91.227:8443/healthz ...
	I0416 18:11:15.742138   12916 api_server.go:279] https://172.19.91.227:8443/healthz returned 200:
	ok
	I0416 18:11:15.742666   12916 status.go:422] multinode-945500 apiserver status = Running (err=<nil>)
	I0416 18:11:15.742828   12916 status.go:257] multinode-945500 status: &{Name:multinode-945500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 18:11:15.742888   12916 status.go:255] checking status of multinode-945500-m02 ...
	I0416 18:11:15.743510   12916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:11:17.671869   12916 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:11:17.672679   12916 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:17.672679   12916 status.go:330] multinode-945500-m02 host status = "Running" (err=<nil>)
	I0416 18:11:17.672679   12916 host.go:66] Checking if "multinode-945500-m02" exists ...
	I0416 18:11:17.673374   12916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:11:19.602373   12916 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:11:19.602511   12916 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:19.602588   12916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:11:21.893062   12916 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:11:21.893821   12916 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:21.893821   12916 host.go:66] Checking if "multinode-945500-m02" exists ...
	I0416 18:11:21.902993   12916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 18:11:21.902993   12916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:11:23.790575   12916 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:11:23.790575   12916 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:23.791231   12916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:11:26.067391   12916 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:11:26.067610   12916 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:26.068207   12916 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:11:26.164655   12916 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.2614211s)
	I0416 18:11:26.173260   12916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:11:26.198876   12916 status.go:257] multinode-945500-m02 status: &{Name:multinode-945500-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0416 18:11:26.198930   12916 status.go:255] checking status of multinode-945500-m03 ...
	I0416 18:11:26.199383   12916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:11:28.132630   12916 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:11:28.132630   12916 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:28.133319   12916 status.go:330] multinode-945500-m03 host status = "Running" (err=<nil>)
	I0416 18:11:28.133319   12916 host.go:66] Checking if "multinode-945500-m03" exists ...
	I0416 18:11:28.133859   12916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:11:30.050971   12916 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:11:30.051146   12916 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:30.051230   12916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:11:32.359487   12916 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:11:32.359487   12916 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:32.360080   12916 host.go:66] Checking if "multinode-945500-m03" exists ...
	I0416 18:11:32.369159   12916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 18:11:32.369159   12916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:11:34.274370   12916 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:11:34.274370   12916 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:34.274370   12916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:11:36.610855   12916 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:11:36.610855   12916 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:36.612274   12916 sshutil.go:53] new ssh client: &{IP:172.19.85.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\id_rsa Username:docker}
	I0416 18:11:36.706036   12916 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3365282s)
	I0416 18:11:36.719478   12916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:11:36.748126   12916 status.go:257] multinode-945500-m03 status: &{Name:multinode-945500-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-945500 status -v=7 --alsologtostderr: exit status 2 (32.0159298s)
-- stdout --
	multinode-945500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-945500-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-945500-m03
	type: Worker
	host: Running
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W0416 18:11:38.360213    9048 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 18:11:38.416921    9048 out.go:291] Setting OutFile to fd 604 ...
	I0416 18:11:38.418030    9048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:11:38.418030    9048 out.go:304] Setting ErrFile to fd 816...
	I0416 18:11:38.418030    9048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:11:38.431773    9048 out.go:298] Setting JSON to false
	I0416 18:11:38.431842    9048 mustload.go:65] Loading cluster: multinode-945500
	I0416 18:11:38.431842    9048 notify.go:220] Checking for updates...
	I0416 18:11:38.432471    9048 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:11:38.432556    9048 status.go:255] checking status of multinode-945500 ...
	I0416 18:11:38.432893    9048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:11:40.362259    9048 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:11:40.362405    9048 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:40.362481    9048 status.go:330] multinode-945500 host status = "Running" (err=<nil>)
	I0416 18:11:40.362481    9048 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:11:40.363166    9048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:11:42.290171    9048 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:11:42.290171    9048 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:42.291059    9048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:11:44.585483    9048 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 18:11:44.585997    9048 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:44.585997    9048 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:11:44.594665    9048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 18:11:44.594665    9048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:11:46.507537    9048 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:11:46.507537    9048 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:46.507898    9048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:11:48.814191    9048 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 18:11:48.814650    9048 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:48.815108    9048 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:11:48.920279    9048 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.325369s)
	I0416 18:11:48.928693    9048 ssh_runner.go:195] Run: systemctl --version
	I0416 18:11:48.949253    9048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:11:48.972685    9048 kubeconfig.go:125] found "multinode-945500" server: "https://172.19.91.227:8443"
	I0416 18:11:48.972685    9048 api_server.go:166] Checking apiserver status ...
	I0416 18:11:48.981194    9048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 18:11:49.021595    9048 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2058/cgroup
	W0416 18:11:49.039650    9048 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2058/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 18:11:49.047781    9048 ssh_runner.go:195] Run: ls
	I0416 18:11:49.053572    9048 api_server.go:253] Checking apiserver healthz at https://172.19.91.227:8443/healthz ...
	I0416 18:11:49.059001    9048 api_server.go:279] https://172.19.91.227:8443/healthz returned 200:
	ok
	I0416 18:11:49.060168    9048 status.go:422] multinode-945500 apiserver status = Running (err=<nil>)
	I0416 18:11:49.060168    9048 status.go:257] multinode-945500 status: &{Name:multinode-945500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 18:11:49.060217    9048 status.go:255] checking status of multinode-945500-m02 ...
	I0416 18:11:49.060749    9048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:11:50.970510    9048 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:11:50.971392    9048 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:50.971392    9048 status.go:330] multinode-945500-m02 host status = "Running" (err=<nil>)
	I0416 18:11:50.971392    9048 host.go:66] Checking if "multinode-945500-m02" exists ...
	I0416 18:11:50.972154    9048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:11:52.935214    9048 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:11:52.935214    9048 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:52.935392    9048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:11:55.252877    9048 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:11:55.252877    9048 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:55.253032    9048 host.go:66] Checking if "multinode-945500-m02" exists ...
	I0416 18:11:55.262181    9048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 18:11:55.262181    9048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:11:57.162177    9048 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:11:57.162177    9048 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:57.162707    9048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:11:59.485819    9048 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:11:59.486695    9048 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:11:59.486695    9048 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:11:59.594443    9048 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3320163s)
	I0416 18:11:59.603203    9048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:11:59.629631    9048 status.go:257] multinode-945500-m02 status: &{Name:multinode-945500-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0416 18:11:59.629631    9048 status.go:255] checking status of multinode-945500-m03 ...
	I0416 18:11:59.630652    9048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:12:01.555230    9048 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:12:01.555230    9048 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:12:01.555230    9048 status.go:330] multinode-945500-m03 host status = "Running" (err=<nil>)
	I0416 18:12:01.555538    9048 host.go:66] Checking if "multinode-945500-m03" exists ...
	I0416 18:12:01.555666    9048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:12:03.456857    9048 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:12:03.456857    9048 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:12:03.456857    9048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:12:05.796274    9048 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:12:05.796274    9048 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:12:05.796716    9048 host.go:66] Checking if "multinode-945500-m03" exists ...
	I0416 18:12:05.804808    9048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 18:12:05.804808    9048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:12:07.754584    9048 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:12:07.754584    9048 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:12:07.755435    9048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m03 ).networkadapters[0]).ipaddresses[0]
	I0416 18:12:10.111145    9048 main.go:141] libmachine: [stdout =====>] : 172.19.85.139
	
	I0416 18:12:10.111209    9048 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:12:10.111209    9048 sshutil.go:53] new ssh client: &{IP:172.19.85.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m03\id_rsa Username:docker}
	I0416 18:12:10.213444    9048 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.4082695s)
	I0416 18:12:10.222894    9048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:12:10.247869    9048 status.go:257] multinode-945500-m03 status: &{Name:multinode-945500-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-945500 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-945500 -n multinode-945500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-945500 -n multinode-945500: (10.9999931s)
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-945500 logs -n 25: (7.4499718s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| delete  | -p mount-start-1-738600                           | mount-start-1-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:49 UTC | 16 Apr 24 17:50 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |                |                     |                     |
	| ssh     | mount-start-2-738600 ssh -- ls                    | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:50 UTC | 16 Apr 24 17:50 UTC |
	|         | /minikube-host                                    |                      |                   |                |                     |                     |
	| stop    | -p mount-start-2-738600                           | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:50 UTC | 16 Apr 24 17:50 UTC |
	| start   | -p mount-start-2-738600                           | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:50 UTC |                     |
	| delete  | -p mount-start-2-738600                           | mount-start-2-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:53 UTC | 16 Apr 24 17:54 UTC |
	| delete  | -p mount-start-1-738600                           | mount-start-1-738600 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:54 UTC | 16 Apr 24 17:54 UTC |
	| start   | -p multinode-945500                               | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 17:54 UTC | 16 Apr 24 18:00 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |                |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |                |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |                |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- apply -f                   | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- rollout                    | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | status deployment/busybox                         |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- get pods -o                | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- get pods -o                | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2 --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2 --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2 -- nslookup              |                      |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx -- nslookup              |                      |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- get pods -o                | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2                          |                      |                   |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | busybox-7fdf7869d9-jxvx2 -- sh                    |                      |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.80.1                          |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx                          |                      |                   |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | busybox-7fdf7869d9-ns8nx -- sh                    |                      |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.80.1                          |                      |                   |                |                     |                     |
	| node    | add -p multinode-945500 -v 3                      | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:02 UTC |                     |
	|         | --alsologtostderr                                 |                      |                   |                |                     |                     |
	| node    | multinode-945500 node stop m03                    | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:07 UTC | 16 Apr 24 18:07 UTC |
	| node    | multinode-945500 node start                       | multinode-945500     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:08 UTC |                     |
	|         | m03 -v=7 --alsologtostderr                        |                      |                   |                |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:54:38
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 17:54:38.458993    6988 out.go:291] Setting OutFile to fd 960 ...
	I0416 17:54:38.459581    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:54:38.459581    6988 out.go:304] Setting ErrFile to fd 676...
	I0416 17:54:38.459678    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:54:38.483191    6988 out.go:298] Setting JSON to false
	I0416 17:54:38.487192    6988 start.go:129] hostinfo: {"hostname":"minikube5","uptime":27708,"bootTime":1713262370,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 17:54:38.487192    6988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 17:54:38.488186    6988 out.go:177] * [multinode-945500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 17:54:38.489188    6988 notify.go:220] Checking for updates...
	I0416 17:54:38.489188    6988 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:54:38.490185    6988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:54:38.490185    6988 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 17:54:38.491184    6988 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:54:38.491184    6988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:54:38.493214    6988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:54:43.355603    6988 out.go:177] * Using the hyperv driver based on user configuration
	I0416 17:54:43.356197    6988 start.go:297] selected driver: hyperv
	I0416 17:54:43.356197    6988 start.go:901] validating driver "hyperv" against <nil>
	I0416 17:54:43.356273    6988 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:54:43.396166    6988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 17:54:43.397176    6988 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:54:43.397504    6988 cni.go:84] Creating CNI manager for ""
	I0416 17:54:43.397537    6988 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0416 17:54:43.397537    6988 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0416 17:54:43.397711    6988 start.go:340] cluster config:
	{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:54:43.397711    6988 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:54:43.399183    6988 out.go:177] * Starting "multinode-945500" primary control-plane node in "multinode-945500" cluster
	I0416 17:54:43.399538    6988 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 17:54:43.399538    6988 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 17:54:43.399538    6988 cache.go:56] Caching tarball of preloaded images
	I0416 17:54:43.399538    6988 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 17:54:43.400205    6988 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 17:54:43.400795    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:54:43.401059    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json: {Name:mk67f15eab35e69a3277eb33417238e6d320045f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:54:43.401506    6988 start.go:360] acquireMachinesLock for multinode-945500: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:54:43.402049    6988 start.go:364] duration metric: took 542.9µs to acquireMachinesLock for "multinode-945500"
	I0416 17:54:43.402113    6988 start.go:93] Provisioning new machine with config: &{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 17:54:43.402113    6988 start.go:125] createHost starting for "" (driver="hyperv")
	I0416 17:54:43.403221    6988 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 17:54:43.403542    6988 start.go:159] libmachine.API.Create for "multinode-945500" (driver="hyperv")
	I0416 17:54:43.403595    6988 client.go:168] LocalClient.Create starting
	I0416 17:54:43.404086    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Decoding PEM data...
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Parsing certificate...
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Decoding PEM data...
	I0416 17:54:43.404276    6988 main.go:141] libmachine: Parsing certificate...
	I0416 17:54:43.404276    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 17:54:45.288246    6988 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 17:54:45.288342    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:45.288493    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 17:54:46.922912    6988 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 17:54:46.922912    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:46.923010    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 17:54:48.270889    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 17:54:48.270889    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:48.270889    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 17:54:51.466825    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 17:54:51.466825    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:51.468671    6988 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 17:54:51.806641    6988 main.go:141] libmachine: Creating SSH key...
	I0416 17:54:52.035351    6988 main.go:141] libmachine: Creating VM...
	I0416 17:54:52.036345    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 17:54:54.656446    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 17:54:54.656494    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:54.656633    6988 main.go:141] libmachine: Using switch "Default Switch"
	I0416 17:54:54.656633    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 17:54:56.229378    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 17:54:56.229607    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:56.229607    6988 main.go:141] libmachine: Creating VHD
	I0416 17:54:56.229607    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 17:54:59.733727    6988 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5A486C23-0EFD-43D1-8BEB-4A60ACE1DF98
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 17:54:59.733800    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:54:59.733873    6988 main.go:141] libmachine: Writing magic tar header
	I0416 17:54:59.733915    6988 main.go:141] libmachine: Writing SSH key tar header
	I0416 17:54:59.741031    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 17:55:02.758991    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:02.758991    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:02.759271    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\disk.vhd' -SizeBytes 20000MB
	I0416 17:55:05.056217    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:05.056217    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:05.057316    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-945500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 17:55:08.311574    6988 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-945500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 17:55:08.311574    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:08.311863    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-945500 -DynamicMemoryEnabled $false
	I0416 17:55:10.388584    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:10.389586    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:10.389586    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-945500 -Count 2
	I0416 17:55:12.413711    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:12.413711    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:12.414332    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-945500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\boot2docker.iso'
	I0416 17:55:14.741711    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:14.741711    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:14.741711    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-945500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\disk.vhd'
	I0416 17:55:17.003645    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:17.003645    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:17.003645    6988 main.go:141] libmachine: Starting VM...
	I0416 17:55:17.003645    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-945500
	I0416 17:55:19.573472    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:19.573700    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:19.573700    6988 main.go:141] libmachine: Waiting for host to start...
	I0416 17:55:19.573790    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:21.624051    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:21.624051    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:21.624771    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:23.884692    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:23.884692    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:24.892318    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:26.899190    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:26.899190    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:26.899348    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:29.176655    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:29.176655    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:30.177215    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:32.143102    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:32.143102    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:32.143464    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:34.404986    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:34.405261    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:35.419315    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:37.438553    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:37.438958    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:37.438958    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:39.692795    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:55:39.692795    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:40.700997    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:42.744138    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:42.744982    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:42.745064    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:45.083348    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:55:45.083348    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:45.083448    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:47.049900    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:47.050444    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:47.050523    6988 machine.go:94] provisionDockerMachine start ...
	I0416 17:55:47.050566    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:49.000414    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:49.000414    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:49.000537    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:51.284377    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:55:51.285296    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:51.290721    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:55:51.303784    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:55:51.303784    6988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:55:51.430251    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 17:55:51.430320    6988 buildroot.go:166] provisioning hostname "multinode-945500"
	I0416 17:55:51.430320    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:53.414239    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:53.414239    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:53.414512    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:55:55.729573    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:55:55.729573    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:55.733714    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:55:55.734245    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:55:55.734245    6988 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-945500 && echo "multinode-945500" | sudo tee /etc/hostname
	I0416 17:55:55.888906    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-945500
	
	I0416 17:55:55.888975    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:55:57.782302    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:55:57.782302    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:55:57.782786    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:00.073834    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:00.073834    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:00.078560    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:00.078657    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:00.078657    6988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-945500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-945500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-945500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:56:00.230030    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:56:00.230079    6988 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 17:56:00.230079    6988 buildroot.go:174] setting up certificates
	I0416 17:56:00.230079    6988 provision.go:84] configureAuth start
	I0416 17:56:00.230182    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:02.147449    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:02.147449    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:02.147449    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:04.449327    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:04.450388    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:04.450388    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:06.443860    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:06.443860    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:06.444760    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:08.814817    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:08.814817    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:08.814817    6988 provision.go:143] copyHostCerts
	I0416 17:56:08.815787    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 17:56:08.816004    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 17:56:08.816004    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 17:56:08.816371    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 17:56:08.817376    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 17:56:08.817582    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 17:56:08.817582    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 17:56:08.817582    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 17:56:08.818480    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 17:56:08.818480    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 17:56:08.818480    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 17:56:08.819278    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 17:56:08.820184    6988 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-945500 san=[127.0.0.1 172.19.91.227 localhost minikube multinode-945500]
	I0416 17:56:09.120922    6988 provision.go:177] copyRemoteCerts
	I0416 17:56:09.129891    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:56:09.129891    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:11.105788    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:11.105788    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:11.105788    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:13.452243    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:13.452243    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:13.452604    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:56:13.553822    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.42368s)
	I0416 17:56:13.553822    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 17:56:13.553822    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 17:56:13.595187    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 17:56:13.595187    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0416 17:56:13.635052    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 17:56:13.635528    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 17:56:13.675952    6988 provision.go:87] duration metric: took 13.4440865s to configureAuth
	I0416 17:56:13.676049    6988 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:56:13.676421    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:56:13.676504    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:15.610838    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:15.610926    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:15.610926    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:17.912484    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:17.913491    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:17.916946    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:17.917531    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:17.917531    6988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 17:56:18.061063    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 17:56:18.061063    6988 buildroot.go:70] root file system type: tmpfs
	I0416 17:56:18.061690    6988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 17:56:18.061690    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:20.049603    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:20.049603    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:20.049978    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:22.383521    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:22.383521    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:22.387896    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:22.388601    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:22.388601    6988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 17:56:22.561164    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 17:56:22.561269    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:24.443674    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:24.444091    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:24.444193    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:26.758959    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:26.758959    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:26.765429    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:26.765429    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:26.765957    6988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 17:56:28.704221    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 17:56:28.704221    6988 machine.go:97] duration metric: took 41.6513356s to provisionDockerMachine
	I0416 17:56:28.704317    6988 client.go:171] duration metric: took 1m45.2947032s to LocalClient.Create
	I0416 17:56:28.704398    6988 start.go:167] duration metric: took 1m45.2948041s to libmachine.API.Create "multinode-945500"
	I0416 17:56:28.704398    6988 start.go:293] postStartSetup for "multinode-945500" (driver="hyperv")
	I0416 17:56:28.704489    6988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:56:28.714148    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:56:28.714148    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:30.638973    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:30.638973    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:30.639089    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:32.961564    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:32.961564    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:32.961564    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:56:33.069322    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3549265s)
	I0416 17:56:33.078710    6988 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:56:33.085331    6988 command_runner.go:130] > NAME=Buildroot
	I0416 17:56:33.085331    6988 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 17:56:33.085331    6988 command_runner.go:130] > ID=buildroot
	I0416 17:56:33.085331    6988 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 17:56:33.085331    6988 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 17:56:33.086070    6988 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:56:33.086171    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 17:56:33.086945    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 17:56:33.088129    6988 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 17:56:33.088129    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 17:56:33.106615    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:56:33.129263    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 17:56:33.174677    6988 start.go:296] duration metric: took 4.469934s for postStartSetup
	I0416 17:56:33.177364    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:35.133709    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:35.133709    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:35.133796    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:37.452577    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:37.452577    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:37.453529    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:56:37.455914    6988 start.go:128] duration metric: took 1m54.0472303s to createHost
	I0416 17:56:37.455914    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:39.425449    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:39.425449    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:39.426011    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:41.744115    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:41.744115    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:41.748497    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:41.748631    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:41.748631    6988 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 17:56:41.875115    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713290202.039643702
	
	I0416 17:56:41.875272    6988 fix.go:216] guest clock: 1713290202.039643702
	I0416 17:56:41.875272    6988 fix.go:229] Guest: 2024-04-16 17:56:42.039643702 +0000 UTC Remote: 2024-04-16 17:56:37.4559145 +0000 UTC m=+119.121500601 (delta=4.583729202s)
	I0416 17:56:41.875399    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:43.872191    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:43.873117    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:43.873117    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:46.207797    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:46.207797    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:46.213575    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:46.213575    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.227 22 <nil> <nil>}
	I0416 17:56:46.213575    6988 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713290201
	I0416 17:56:46.370971    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 17:56:41 UTC 2024
	
	I0416 17:56:46.370971    6988 fix.go:236] clock set: Tue Apr 16 17:56:41 UTC 2024
	 (err=<nil>)
	I0416 17:56:46.371058    6988 start.go:83] releasing machines lock for "multinode-945500", held for 2m2.9620339s
	I0416 17:56:46.371284    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:48.308157    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:48.308984    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:48.309041    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:50.575031    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:50.575031    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:50.579218    6988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:56:50.579218    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:50.586441    6988 ssh_runner.go:195] Run: cat /version.json
	I0416 17:56:50.586979    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:56:52.634472    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:52.634472    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:52.634472    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:52.639621    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:56:52.639621    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:52.639621    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:56:55.047917    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:55.048488    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:55.048917    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:56:55.065759    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:56:55.066462    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:56:55.066602    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:56:55.354145    6988 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 17:56:55.354145    6988 command_runner.go:130] > {"iso_version": "v1.33.0-1713236417-18649", "kicbase_version": "v0.0.43-1713215244-18647", "minikube_version": "v1.33.0-beta.0", "commit": "4ec1a3e88a9f3ffb3930e555284d907468ae83a6"}
	I0416 17:56:55.354145    6988 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7746557s)
	I0416 17:56:55.354145    6988 ssh_runner.go:235] Completed: cat /version.json: (4.7668953s)
	I0416 17:56:55.366453    6988 ssh_runner.go:195] Run: systemctl --version
	I0416 17:56:55.375220    6988 command_runner.go:130] > systemd 252 (252)
	I0416 17:56:55.375220    6988 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0416 17:56:55.384285    6988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 17:56:55.392020    6988 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0416 17:56:55.392567    6988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:56:55.401209    6988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:56:55.426637    6988 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0416 17:56:55.427403    6988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 17:56:55.427503    6988 start.go:494] detecting cgroup driver to use...
	I0416 17:56:55.427534    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:56:55.457110    6988 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0416 17:56:55.470104    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 17:56:55.494070    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 17:56:55.511268    6988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 17:56:55.523954    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 17:56:55.549161    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 17:56:55.576216    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 17:56:55.602400    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 17:56:55.630572    6988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:56:55.656816    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 17:56:55.683825    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 17:56:55.710767    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 17:56:55.737864    6988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:56:55.753678    6988 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 17:56:55.761926    6988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:56:55.794919    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:56:55.964839    6988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 17:56:55.993258    6988 start.go:494] detecting cgroup driver to use...
	I0416 17:56:56.002807    6988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 17:56:56.020460    6988 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0416 17:56:56.020914    6988 command_runner.go:130] > [Unit]
	I0416 17:56:56.020998    6988 command_runner.go:130] > Description=Docker Application Container Engine
	I0416 17:56:56.020998    6988 command_runner.go:130] > Documentation=https://docs.docker.com
	I0416 17:56:56.020998    6988 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0416 17:56:56.020998    6988 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0416 17:56:56.021071    6988 command_runner.go:130] > StartLimitBurst=3
	I0416 17:56:56.021071    6988 command_runner.go:130] > StartLimitIntervalSec=60
	I0416 17:56:56.021071    6988 command_runner.go:130] > [Service]
	I0416 17:56:56.021071    6988 command_runner.go:130] > Type=notify
	I0416 17:56:56.021071    6988 command_runner.go:130] > Restart=on-failure
	I0416 17:56:56.021071    6988 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0416 17:56:56.021156    6988 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0416 17:56:56.021156    6988 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0416 17:56:56.021156    6988 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0416 17:56:56.021241    6988 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0416 17:56:56.021281    6988 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0416 17:56:56.021354    6988 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0416 17:56:56.021427    6988 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0416 17:56:56.021427    6988 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0416 17:56:56.021427    6988 command_runner.go:130] > ExecStart=
	I0416 17:56:56.021508    6988 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0416 17:56:56.021508    6988 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0416 17:56:56.021586    6988 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0416 17:56:56.021586    6988 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0416 17:56:56.021663    6988 command_runner.go:130] > LimitNOFILE=infinity
	I0416 17:56:56.021663    6988 command_runner.go:130] > LimitNPROC=infinity
	I0416 17:56:56.021663    6988 command_runner.go:130] > LimitCORE=infinity
	I0416 17:56:56.021663    6988 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0416 17:56:56.021663    6988 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0416 17:56:56.021738    6988 command_runner.go:130] > TasksMax=infinity
	I0416 17:56:56.021738    6988 command_runner.go:130] > TimeoutStartSec=0
	I0416 17:56:56.021738    6988 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0416 17:56:56.021738    6988 command_runner.go:130] > Delegate=yes
	I0416 17:56:56.021738    6988 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0416 17:56:56.021811    6988 command_runner.go:130] > KillMode=process
	I0416 17:56:56.021811    6988 command_runner.go:130] > [Install]
	I0416 17:56:56.021811    6988 command_runner.go:130] > WantedBy=multi-user.target
	I0416 17:56:56.032694    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:56:56.060059    6988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:56:56.101716    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:56:56.131287    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 17:56:56.163190    6988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 17:56:56.210983    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 17:56:56.231971    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:56:56.261397    6988 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0416 17:56:56.272666    6988 ssh_runner.go:195] Run: which cri-dockerd
	I0416 17:56:56.276995    6988 command_runner.go:130] > /usr/bin/cri-dockerd
	I0416 17:56:56.286591    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 17:56:56.299870    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 17:56:56.337571    6988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 17:56:56.500406    6988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 17:56:56.646617    6988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 17:56:56.646617    6988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 17:56:56.690996    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:56:56.871261    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:56:59.295937    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4242935s)
	I0416 17:56:59.304599    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 17:56:59.333610    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 17:56:59.361657    6988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 17:56:59.541548    6988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 17:56:59.705672    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:56:59.866404    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 17:56:59.907640    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 17:56:59.939748    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:00.107406    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 17:57:00.200852    6988 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 17:57:00.212214    6988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 17:57:00.220777    6988 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0416 17:57:00.220777    6988 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0416 17:57:00.220777    6988 command_runner.go:130] > Device: 0,22	Inode: 885         Links: 1
	I0416 17:57:00.220777    6988 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0416 17:57:00.220777    6988 command_runner.go:130] > Access: 2024-04-16 17:57:00.296362377 +0000
	I0416 17:57:00.220777    6988 command_runner.go:130] > Modify: 2024-04-16 17:57:00.296362377 +0000
	I0416 17:57:00.220777    6988 command_runner.go:130] > Change: 2024-04-16 17:57:00.300362562 +0000
	I0416 17:57:00.220777    6988 command_runner.go:130] >  Birth: -
	I0416 17:57:00.220777    6988 start.go:562] Will wait 60s for crictl version
	I0416 17:57:00.230775    6988 ssh_runner.go:195] Run: which crictl
	I0416 17:57:00.235786    6988 command_runner.go:130] > /usr/bin/crictl
	I0416 17:57:00.245023    6988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:57:00.292622    6988 command_runner.go:130] > Version:  0.1.0
	I0416 17:57:00.292622    6988 command_runner.go:130] > RuntimeName:  docker
	I0416 17:57:00.292622    6988 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0416 17:57:00.292739    6988 command_runner.go:130] > RuntimeApiVersion:  v1
	I0416 17:57:00.292794    6988 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 17:57:00.301388    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 17:57:00.331067    6988 command_runner.go:130] > 26.0.1
	I0416 17:57:00.337439    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 17:57:00.365025    6988 command_runner.go:130] > 26.0.1
	I0416 17:57:00.367212    6988 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 17:57:00.367413    6988 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 17:57:00.371515    6988 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 17:57:00.371597    6988 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 17:57:00.371597    6988 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 17:57:00.371597    6988 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 17:57:00.374158    6988 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 17:57:00.374158    6988 ip.go:210] interface addr: 172.19.80.1/20
	I0416 17:57:00.380883    6988 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 17:57:00.386921    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
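	The two log lines above show minikube's idempotent `/etc/hosts` refresh: first a `grep` probe for an existing `host.minikube.internal` entry, then a rewrite that strips any stale line for that name and appends the current mapping. A minimal sketch of the same pattern, run against a scratch file rather than the real `/etc/hosts` (the file path and sample entries here are illustrative, not from the log):

	```shell
	# Hedged sketch of the hosts-entry refresh, against a temp file.
	set -e
	hosts=$(mktemp)
	printf '127.0.0.1\tlocalhost\n172.19.80.1\thost.minikube.internal\n' > "$hosts"
	# Drop any stale mapping for the name, then append the current one:
	{ grep -v $'\thost.minikube.internal$' "$hosts"; \
	  echo $'172.19.80.1\thost.minikube.internal'; } > "$hosts.new"
	mv "$hosts.new" "$hosts"
	# The name appears exactly once no matter how often this runs:
	grep -c 'host.minikube.internal' "$hosts"   # → 1
	```

	Because the filter-then-append runs as one pipeline into a new file that replaces the old one, repeated runs (as on cluster restarts) never accumulate duplicate entries.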
	I0416 17:57:00.407839    6988 kubeadm.go:877] updating cluster {Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 17:57:00.407839    6988 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 17:57:00.416191    6988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 17:57:00.437198    6988 docker.go:685] Got preloaded images: 
	I0416 17:57:00.437198    6988 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0416 17:57:00.446472    6988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 17:57:00.461564    6988 command_runner.go:139] > {"Repositories":{}}
	I0416 17:57:00.472373    6988 ssh_runner.go:195] Run: which lz4
	I0416 17:57:00.477412    6988 command_runner.go:130] > /usr/bin/lz4
	I0416 17:57:00.477412    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0416 17:57:00.487276    6988 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 17:57:00.492861    6988 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 17:57:00.493543    6988 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 17:57:00.493600    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0416 17:57:01.970587    6988 docker.go:649] duration metric: took 1.4924844s to copy over tarball
	I0416 17:57:01.979028    6988 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 17:57:10.810575    6988 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.831045s)
	I0416 17:57:10.810689    6988 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 17:57:10.875450    6988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0416 17:57:10.895935    6988 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.29.3":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.29.3":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.29.3":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b
5bbe4f71784e392"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.29.3":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0416 17:57:10.895935    6988 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0416 17:57:10.938742    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:11.136149    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 17:57:13.733531    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5972349s)
	I0416 17:57:13.742898    6988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0416 17:57:13.765918    6988 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0416 17:57:13.765918    6988 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:57:13.765918    6988 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0416 17:57:13.765918    6988 cache_images.go:84] Images are preloaded, skipping loading
	I0416 17:57:13.765918    6988 kubeadm.go:928] updating node { 172.19.91.227 8443 v1.29.3 docker true true} ...
	I0416 17:57:13.766906    6988 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-945500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.91.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 17:57:13.774901    6988 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 17:57:13.804585    6988 command_runner.go:130] > cgroupfs
	I0416 17:57:13.804682    6988 cni.go:84] Creating CNI manager for ""
	I0416 17:57:13.804682    6988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 17:57:13.804682    6988 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 17:57:13.804682    6988 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.91.227 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-945500 NodeName:multinode-945500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.91.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.91.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 17:57:13.804682    6988 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.91.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-945500"
	  kubeletExtraArgs:
	    node-ip: 172.19.91.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.91.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
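	The kubeadm config printed above is one multi-document YAML file bundling four objects: `InitConfiguration`, `ClusterConfiguration`, `KubeletConfiguration`, and `KubeProxyConfiguration`. A quick stdlib-only way to sanity-check which kinds such a file contains, without a full YAML parser (the sample string below just mirrors the document boundaries above):

	```python
	# Split a multi-document YAML on "---" separators and report each doc's kind.
	sample = """\
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	"""

	def doc_kinds(text: str) -> list[str]:
	    kinds = []
	    for doc in text.split("\n---\n"):
	        for line in doc.splitlines():
	            if line.startswith("kind:"):
	                kinds.append(line.split(":", 1)[1].strip())
	    return kinds

	print(doc_kinds(sample))
	# → ['InitConfiguration', 'ClusterConfiguration', 'KubeletConfiguration', 'KubeProxyConfiguration']
	```

	This is only a shape check; kubeadm itself validates the full schema when the file is consumed at `/var/tmp/minikube/kubeadm.yaml`.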
	
	I0416 17:57:13.813761    6988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 17:57:13.830081    6988 command_runner.go:130] > kubeadm
	I0416 17:57:13.830081    6988 command_runner.go:130] > kubectl
	I0416 17:57:13.830081    6988 command_runner.go:130] > kubelet
	I0416 17:57:13.830165    6988 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 17:57:13.838770    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 17:57:13.852826    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0416 17:57:13.878799    6988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 17:57:13.905862    6988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0416 17:57:13.943017    6988 ssh_runner.go:195] Run: grep 172.19.91.227	control-plane.minikube.internal$ /etc/hosts
	I0416 17:57:13.949214    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.91.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 17:57:13.980273    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:14.153644    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:57:14.177658    6988 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500 for IP: 172.19.91.227
	I0416 17:57:14.178687    6988 certs.go:194] generating shared ca certs ...
	I0416 17:57:14.178687    6988 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.179455    6988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 17:57:14.179902    6988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 17:57:14.180190    6988 certs.go:256] generating profile certs ...
	I0416 17:57:14.180755    6988 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.key
	I0416 17:57:14.180755    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.crt with IP's: []
	I0416 17:57:14.411174    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.crt ...
	I0416 17:57:14.411174    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.crt: {Name:mkc0623b015c4c96d85b8b3b13eb2cc6d3ac8763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.412171    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.key ...
	I0416 17:57:14.412171    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.key: {Name:mkbd9c01c6892e02b0a8d9c434e98a742e87c2cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.413058    6988 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.e3ea85af
	I0416 17:57:14.414154    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.e3ea85af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.91.227]
	I0416 17:57:14.575473    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.e3ea85af ...
	I0416 17:57:14.575473    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.e3ea85af: {Name:mk62c37573433811afa986b89a237b6c7bb0d1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.576358    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.e3ea85af ...
	I0416 17:57:14.576358    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.e3ea85af: {Name:mk6c23ff826064c327d5a977affe1877b10d9b72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.577574    6988 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.e3ea85af -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt
	I0416 17:57:14.590486    6988 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.e3ea85af -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key
	I0416 17:57:14.590795    6988 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key
	I0416 17:57:14.590795    6988 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt with IP's: []
	I0416 17:57:14.794779    6988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt ...
	I0416 17:57:14.795779    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt: {Name:mk40c9063a89a73b56bd4ccd89e15d6559ba1e37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.796782    6988 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key ...
	I0416 17:57:14.796782    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key: {Name:mk5e95084b6a4adeb7806da3f2d851d8919dced5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:14.798528    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 17:57:14.798760    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 17:57:14.799041    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 17:57:14.799237    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 17:57:14.799423    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 17:57:14.799630    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 17:57:14.799827    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 17:57:14.806003    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 17:57:14.809977    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 17:57:14.809977    6988 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 17:57:14.809977    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 17:57:14.809977    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 17:57:14.811027    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 17:57:14.811027    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 17:57:14.811027    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 17:57:14.811551    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 17:57:14.811650    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:14.811737    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 17:57:14.812935    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 17:57:14.852949    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 17:57:14.891959    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 17:57:14.931152    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 17:57:14.968412    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 17:57:15.008983    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 17:57:15.048515    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 17:57:15.089091    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 17:57:15.125356    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 17:57:15.162621    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 17:57:15.205246    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 17:57:15.248985    6988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 17:57:15.289002    6988 ssh_runner.go:195] Run: openssl version
	I0416 17:57:15.296351    6988 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0416 17:57:15.308333    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 17:57:15.335334    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:15.341349    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:15.342189    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:15.351026    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:15.358591    6988 command_runner.go:130] > b5213941
	I0416 17:57:15.367034    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 17:57:15.391467    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 17:57:15.416387    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 17:57:15.423831    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 17:57:15.423957    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 17:57:15.434442    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 17:57:15.442459    6988 command_runner.go:130] > 51391683
	I0416 17:57:15.451530    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
	I0416 17:57:15.480393    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 17:57:15.509124    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 17:57:15.515721    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 17:57:15.515827    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 17:57:15.524021    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 17:57:15.533694    6988 command_runner.go:130] > 3ec20f2e
	I0416 17:57:15.541647    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
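	The three cert-install passes above all follow OpenSSL's `c_rehash` symlink convention: a trusted CA is found at verify time via a link named `<subject-hash>.0` pointing at the PEM file, which is why minikube computes `openssl x509 -hash` before creating each `/etc/ssl/certs/*.0` link. A self-contained sketch of that convention in a temp directory with a throwaway self-signed cert (the directory, CN, and key size are illustrative; `openssl` is assumed to be installed):

	```shell
	# Demonstrate the <hash>.0 trust-store link that the log lines above create.
	set -e
	tmp=$(mktemp -d)
	openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
	  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" 2>/dev/null
	hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
	ln -fs "$tmp/ca.pem" "$tmp/${hash}.0"
	# OpenSSL locates the CA by this hash-named link when verifying:
	openssl verify -CApath "$tmp" "$tmp/ca.pem"
	```

	The `.0` suffix disambiguates distinct certificates whose subjects happen to hash to the same value (`.1`, `.2`, … would follow).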
	I0416 17:57:15.567570    6988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:57:15.573415    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 17:57:15.573840    6988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 17:57:15.574281    6988 kubeadm.go:391] StartCluster: {Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.2
9.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:57:15.580506    6988 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 17:57:15.612292    6988 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 17:57:15.627466    6988 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0416 17:57:15.628097    6988 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0416 17:57:15.628097    6988 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0416 17:57:15.635032    6988 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 17:57:15.660479    6988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 17:57:15.676695    6988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0416 17:57:15.676792    6988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0416 17:57:15.676792    6988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0416 17:57:15.676855    6988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:57:15.676918    6988 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:57:15.676973    6988 kubeadm.go:156] found existing configuration files:
	
	I0416 17:57:15.684985    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 17:57:15.700012    6988 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:57:15.700126    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:57:15.708938    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 17:57:15.734829    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 17:57:15.747861    6988 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:57:15.748201    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:57:15.756696    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 17:57:15.784559    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 17:57:15.804131    6988 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:57:15.804131    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:57:15.815130    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 17:57:15.838118    6988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 17:57:15.854130    6988 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:57:15.854130    6988 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:57:15.862912    6988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 17:57:15.876128    6988 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 17:57:16.053541    6988 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 17:57:16.053541    6988 command_runner.go:130] > [init] Using Kubernetes version: v1.29.3
	I0416 17:57:16.053865    6988 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 17:57:16.053865    6988 command_runner.go:130] > [preflight] Running pre-flight checks
	I0416 17:57:16.200461    6988 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:57:16.200461    6988 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:57:16.200461    6988 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:57:16.200461    6988 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:57:16.200461    6988 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:57:16.200461    6988 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:57:16.451494    6988 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:57:16.452473    6988 out.go:204]   - Generating certificates and keys ...
	I0416 17:57:16.451494    6988 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:57:16.453479    6988 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 17:57:16.453479    6988 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0416 17:57:16.453479    6988 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0416 17:57:16.453479    6988 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 17:57:16.705308    6988 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 17:57:16.705409    6988 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 17:57:16.859312    6988 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 17:57:16.859312    6988 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0416 17:57:17.049120    6988 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 17:57:17.049237    6988 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0416 17:57:17.314616    6988 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 17:57:17.314728    6988 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0416 17:57:17.509835    6988 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 17:57:17.509835    6988 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0416 17:57:17.510247    6988 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-945500] and IPs [172.19.91.227 127.0.0.1 ::1]
	I0416 17:57:17.510247    6988 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-945500] and IPs [172.19.91.227 127.0.0.1 ::1]
	I0416 17:57:17.791919    6988 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 17:57:17.791919    6988 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0416 17:57:17.792356    6988 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-945500] and IPs [172.19.91.227 127.0.0.1 ::1]
	I0416 17:57:17.792356    6988 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-945500] and IPs [172.19.91.227 127.0.0.1 ::1]
	I0416 17:57:17.995022    6988 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 17:57:17.995106    6988 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 17:57:18.220639    6988 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 17:57:18.220729    6988 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 17:57:18.582174    6988 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0416 17:57:18.582274    6988 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 17:57:18.582480    6988 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:57:18.582554    6988 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:57:18.743963    6988 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:57:18.744564    6988 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:57:19.067769    6988 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 17:57:19.068120    6988 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 17:57:19.240331    6988 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:57:19.240672    6988 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:57:19.461195    6988 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:57:19.461195    6988 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:57:19.652943    6988 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:57:19.653442    6988 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:57:19.654516    6988 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:57:19.654516    6988 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:57:19.660559    6988 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:57:19.660559    6988 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:57:19.661534    6988 out.go:204]   - Booting up control plane ...
	I0416 17:57:19.661534    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:57:19.661534    6988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:57:19.662544    6988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:57:19.662544    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:57:19.663540    6988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:57:19.663540    6988 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:57:19.684534    6988 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:57:19.685153    6988 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:57:19.687532    6988 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:57:19.687532    6988 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:57:19.687532    6988 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 17:57:19.687532    6988 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0416 17:57:19.860703    6988 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:57:19.860788    6988 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:57:26.366044    6988 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.507200 seconds
	I0416 17:57:26.366044    6988 command_runner.go:130] > [apiclient] All control plane components are healthy after 6.507200 seconds
	I0416 17:57:26.385213    6988 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 17:57:26.385213    6988 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 17:57:26.408456    6988 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 17:57:26.408456    6988 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 17:57:26.942416    6988 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0416 17:57:26.942416    6988 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 17:57:26.943198    6988 kubeadm.go:309] [mark-control-plane] Marking the node multinode-945500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 17:57:26.943369    6988 command_runner.go:130] > [mark-control-plane] Marking the node multinode-945500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 17:57:27.456093    6988 kubeadm.go:309] [bootstrap-token] Using token: v7bkxo.pzxgmh7iiytdovwq
	I0416 17:57:27.456235    6988 command_runner.go:130] > [bootstrap-token] Using token: v7bkxo.pzxgmh7iiytdovwq
	I0416 17:57:27.456953    6988 out.go:204]   - Configuring RBAC rules ...
	I0416 17:57:27.457407    6988 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 17:57:27.457407    6988 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 17:57:27.473244    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 17:57:27.473244    6988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 17:57:27.485961    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 17:57:27.486019    6988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 17:57:27.492510    6988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 17:57:27.492510    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 17:57:27.496129    6988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 17:57:27.496129    6988 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 17:57:27.501092    6988 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 17:57:27.501753    6988 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 17:57:27.517045    6988 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 17:57:27.517045    6988 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 17:57:27.829288    6988 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 17:57:27.829833    6988 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0416 17:57:27.880030    6988 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 17:57:27.880030    6988 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0416 17:57:27.883021    6988 kubeadm.go:309] 
	I0416 17:57:27.883395    6988 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0416 17:57:27.883467    6988 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 17:57:27.883558    6988 kubeadm.go:309] 
	I0416 17:57:27.883809    6988 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 17:57:27.883809    6988 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0416 17:57:27.883877    6988 kubeadm.go:309] 
	I0416 17:57:27.883877    6988 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 17:57:27.883877    6988 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0416 17:57:27.883877    6988 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 17:57:27.883877    6988 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 17:57:27.883877    6988 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 17:57:27.883877    6988 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 17:57:27.883877    6988 kubeadm.go:309] 
	I0416 17:57:27.883877    6988 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0416 17:57:27.883877    6988 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 17:57:27.883877    6988 kubeadm.go:309] 
	I0416 17:57:27.884765    6988 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 17:57:27.884765    6988 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 17:57:27.884765    6988 kubeadm.go:309] 
	I0416 17:57:27.884765    6988 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 17:57:27.884765    6988 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0416 17:57:27.884765    6988 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 17:57:27.884765    6988 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 17:57:27.884765    6988 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 17:57:27.884765    6988 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 17:57:27.884765    6988 kubeadm.go:309] 
	I0416 17:57:27.884765    6988 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0416 17:57:27.884765    6988 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 17:57:27.885775    6988 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0416 17:57:27.885775    6988 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 17:57:27.885775    6988 kubeadm.go:309] 
	I0416 17:57:27.885775    6988 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token v7bkxo.pzxgmh7iiytdovwq \
	I0416 17:57:27.885775    6988 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token v7bkxo.pzxgmh7iiytdovwq \
	I0416 17:57:27.885775    6988 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 17:57:27.885775    6988 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c \
	I0416 17:57:27.885775    6988 kubeadm.go:309] 	--control-plane 
	I0416 17:57:27.885775    6988 command_runner.go:130] > 	--control-plane 
	I0416 17:57:27.885775    6988 kubeadm.go:309] 
	I0416 17:57:27.886749    6988 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0416 17:57:27.886749    6988 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 17:57:27.886749    6988 kubeadm.go:309] 
	I0416 17:57:27.886749    6988 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token v7bkxo.pzxgmh7iiytdovwq \
	I0416 17:57:27.886749    6988 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token v7bkxo.pzxgmh7iiytdovwq \
	I0416 17:57:27.886749    6988 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 17:57:27.886749    6988 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 17:57:27.886749    6988 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 17:57:27.887747    6988 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 17:57:27.887747    6988 cni.go:84] Creating CNI manager for ""
	I0416 17:57:27.887747    6988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 17:57:27.888782    6988 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 17:57:27.898776    6988 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 17:57:27.906367    6988 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0416 17:57:27.906367    6988 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0416 17:57:27.906446    6988 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0416 17:57:27.906446    6988 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0416 17:57:27.906446    6988 command_runner.go:130] > Access: 2024-04-16 17:55:43.845708000 +0000
	I0416 17:57:27.906446    6988 command_runner.go:130] > Modify: 2024-04-16 08:43:32.000000000 +0000
	I0416 17:57:27.906446    6988 command_runner.go:130] > Change: 2024-04-16 17:55:34.250000000 +0000
	I0416 17:57:27.906446    6988 command_runner.go:130] >  Birth: -
	I0416 17:57:27.906446    6988 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 17:57:27.906446    6988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 17:57:27.988519    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 17:57:28.490851    6988 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0416 17:57:28.498847    6988 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0416 17:57:28.511858    6988 command_runner.go:130] > serviceaccount/kindnet created
	I0416 17:57:28.523843    6988 command_runner.go:130] > daemonset.apps/kindnet created
	I0416 17:57:28.526917    6988 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 17:57:28.536843    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:28.538723    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-945500 minikube.k8s.io/updated_at=2024_04_16T17_57_28_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=multinode-945500 minikube.k8s.io/primary=true
	I0416 17:57:28.553542    6988 command_runner.go:130] > -16
	I0416 17:57:28.553542    6988 ops.go:34] apiserver oom_adj: -16
	I0416 17:57:28.663066    6988 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0416 17:57:28.672472    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:28.703696    6988 command_runner.go:130] > node/multinode-945500 labeled
	I0416 17:57:28.779726    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:29.176642    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:29.310699    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:29.688820    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:29.783095    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:30.180137    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:30.283623    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:30.677902    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:30.770542    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:31.173788    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:31.267177    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:31.681339    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:31.776737    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:32.179098    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:32.275419    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:32.685593    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:32.784034    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:33.184934    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:33.284755    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:33.689894    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:33.786322    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:34.177543    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:34.278089    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:34.688074    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:34.788843    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:35.176613    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:35.278146    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:35.690652    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:35.790109    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:36.185543    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:36.283203    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:36.685087    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:36.787681    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:37.183826    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:37.287103    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:37.686779    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:37.790505    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:38.186663    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:38.313330    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:38.690145    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:38.792194    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:39.188096    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:39.307296    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:39.673049    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:39.777746    6988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0416 17:57:40.175109    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:40.317376    6988 command_runner.go:130] > NAME      SECRETS   AGE
	I0416 17:57:40.317525    6988 command_runner.go:130] > default   0         0s
	I0416 17:57:40.317525    6988 kubeadm.go:1107] duration metric: took 11.7899387s to wait for elevateKubeSystemPrivileges
	W0416 17:57:40.317725    6988 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 17:57:40.317725    6988 kubeadm.go:393] duration metric: took 24.7420862s to StartCluster
	I0416 17:57:40.317841    6988 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:40.318068    6988 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:57:40.320080    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:40.321302    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 17:57:40.321470    6988 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 17:57:40.321470    6988 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 17:57:40.321614    6988 addons.go:69] Setting storage-provisioner=true in profile "multinode-945500"
	I0416 17:57:40.321614    6988 addons.go:234] Setting addon storage-provisioner=true in "multinode-945500"
	I0416 17:57:40.321614    6988 host.go:66] Checking if "multinode-945500" exists ...
	I0416 17:57:40.321614    6988 addons.go:69] Setting default-storageclass=true in profile "multinode-945500"
	I0416 17:57:40.321614    6988 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-945500"
	I0416 17:57:40.321614    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:57:40.322690    6988 out.go:177] * Verifying Kubernetes components...
	I0416 17:57:40.322606    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:40.322690    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:40.336146    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:40.543940    6988 command_runner.go:130] > apiVersion: v1
	I0416 17:57:40.544012    6988 command_runner.go:130] > data:
	I0416 17:57:40.544012    6988 command_runner.go:130] >   Corefile: |
	I0416 17:57:40.544012    6988 command_runner.go:130] >     .:53 {
	I0416 17:57:40.544012    6988 command_runner.go:130] >         errors
	I0416 17:57:40.544012    6988 command_runner.go:130] >         health {
	I0416 17:57:40.544088    6988 command_runner.go:130] >            lameduck 5s
	I0416 17:57:40.544088    6988 command_runner.go:130] >         }
	I0416 17:57:40.544088    6988 command_runner.go:130] >         ready
	I0416 17:57:40.544112    6988 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0416 17:57:40.544112    6988 command_runner.go:130] >            pods insecure
	I0416 17:57:40.544112    6988 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0416 17:57:40.544112    6988 command_runner.go:130] >            ttl 30
	I0416 17:57:40.544112    6988 command_runner.go:130] >         }
	I0416 17:57:40.544112    6988 command_runner.go:130] >         prometheus :9153
	I0416 17:57:40.544112    6988 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0416 17:57:40.544191    6988 command_runner.go:130] >            max_concurrent 1000
	I0416 17:57:40.544191    6988 command_runner.go:130] >         }
	I0416 17:57:40.544191    6988 command_runner.go:130] >         cache 30
	I0416 17:57:40.544191    6988 command_runner.go:130] >         loop
	I0416 17:57:40.544191    6988 command_runner.go:130] >         reload
	I0416 17:57:40.544191    6988 command_runner.go:130] >         loadbalance
	I0416 17:57:40.544191    6988 command_runner.go:130] >     }
	I0416 17:57:40.544191    6988 command_runner.go:130] > kind: ConfigMap
	I0416 17:57:40.544191    6988 command_runner.go:130] > metadata:
	I0416 17:57:40.544191    6988 command_runner.go:130] >   creationTimestamp: "2024-04-16T17:57:27Z"
	I0416 17:57:40.544191    6988 command_runner.go:130] >   name: coredns
	I0416 17:57:40.544191    6988 command_runner.go:130] >   namespace: kube-system
	I0416 17:57:40.544296    6988 command_runner.go:130] >   resourceVersion: "274"
	I0416 17:57:40.544296    6988 command_runner.go:130] >   uid: 8b9b71a6-9315-41d9-b055-6f10c4c901fd
	I0416 17:57:40.544483    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 17:57:40.652097    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:57:40.902041    6988 command_runner.go:130] > configmap/coredns replaced
	I0416 17:57:40.905269    6988 start.go:946] {"host.minikube.internal": 172.19.80.1} host record injected into CoreDNS's ConfigMap
	I0416 17:57:40.906408    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:57:40.906594    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:57:40.907054    6988 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.91.227:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 17:57:40.907195    6988 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.91.227:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 17:57:40.908042    6988 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 17:57:40.908659    6988 node_ready.go:35] waiting up to 6m0s for node "multinode-945500" to be "Ready" ...
	I0416 17:57:40.908860    6988 round_trippers.go:463] GET https://172.19.91.227:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0416 17:57:40.908860    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:40.908860    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:40.908860    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:40.908860    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:40.908860    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:40.908955    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:40.908955    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:40.937154    6988 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0416 17:57:40.937516    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:40.937516    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:40.937516    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:40.937516    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:40.937516    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:40.937516    6988 round_trippers.go:580]     Audit-Id: e2e8d91f-cc17-4b2b-a543-43ca22e7c70f
	I0416 17:57:40.937516    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:40.937792    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:40.938405    6988 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0416 17:57:40.938543    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:40.938543    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:40.938543    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:40.938543    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:40.938543    6988 round_trippers.go:580]     Content-Length: 291
	I0416 17:57:40.938662    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:40.938662    6988 round_trippers.go:580]     Audit-Id: 9f1849c0-96cc-4587-8702-5be0aa8b035b
	I0416 17:57:40.938662    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:40.938662    6988 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"474cfa32-79eb-4bf1-81ff-b938f83eaa0d","resourceVersion":"383","creationTimestamp":"2024-04-16T17:57:27Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0416 17:57:40.939508    6988 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"474cfa32-79eb-4bf1-81ff-b938f83eaa0d","resourceVersion":"383","creationTimestamp":"2024-04-16T17:57:27Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0416 17:57:40.939654    6988 round_trippers.go:463] PUT https://172.19.91.227:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0416 17:57:40.939709    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:40.939709    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:40.939709    6988 round_trippers.go:473]     Content-Type: application/json
	I0416 17:57:40.939709    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:40.954484    6988 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0416 17:57:40.954484    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Content-Length: 291
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Audit-Id: 33fbc171-b87c-4a8b-8b71-fb72b829abb0
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:40.954484    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:40.954484    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:40.954484    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:40.954484    6988 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"474cfa32-79eb-4bf1-81ff-b938f83eaa0d","resourceVersion":"385","creationTimestamp":"2024-04-16T17:57:27Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0416 17:57:41.416463    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:41.416653    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:41.416463    6988 round_trippers.go:463] GET https://172.19.91.227:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0416 17:57:41.416653    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:41.416653    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:41.416653    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:41.416739    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:41.416886    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:41.420106    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:41.420495    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:41.420495    6988 round_trippers.go:580]     Audit-Id: 0ef8009e-dcde-4e08-b2eb-b21c97c9713b
	I0416 17:57:41.420495    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:41.420495    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:41.420495    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:41.420495    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:41.420495    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:41.420873    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:41.420873    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:41.420970    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:41.420970    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Content-Length: 291
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:41 GMT
	I0416 17:57:41.420970    6988 round_trippers.go:580]     Audit-Id: 876a0092-4e47-429b-acd8-759d477820ca
	I0416 17:57:41.421083    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:41.421155    6988 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"474cfa32-79eb-4bf1-81ff-b938f83eaa0d","resourceVersion":"395","creationTimestamp":"2024-04-16T17:57:27Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0416 17:57:41.421374    6988 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-945500" context rescaled to 1 replicas
	I0416 17:57:41.920343    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:41.920343    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:41.920343    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:41.920343    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:41.925445    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 17:57:41.925445    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:41.925445    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:42 GMT
	I0416 17:57:41.925445    6988 round_trippers.go:580]     Audit-Id: 7df7d5cd-8d90-47e3-a620-e333515b8855
	I0416 17:57:41.925445    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:41.925445    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:41.925445    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:41.925445    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:41.927690    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:42.389093    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:42.389178    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:42.389320    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:42.390035    6988 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:57:42.389320    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:42.390775    6988 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:57:42.390775    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 17:57:42.390840    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 17:57:42.390906    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:42.391435    6988 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.91.227:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 17:57:42.392060    6988 addons.go:234] Setting addon default-storageclass=true in "multinode-945500"
	I0416 17:57:42.392151    6988 host.go:66] Checking if "multinode-945500" exists ...
	I0416 17:57:42.393041    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:42.412561    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:42.412743    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:42.412743    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:42.412743    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:42.419056    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 17:57:42.419366    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:42.419366    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:42.419366    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:42.419366    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:42.419366    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:42.419366    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:42 GMT
	I0416 17:57:42.419366    6988 round_trippers.go:580]     Audit-Id: b3f3bd38-d9b8-462a-9951-d6845f4c1e8b
	I0416 17:57:42.419606    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:42.919136    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:42.919136    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:42.919136    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:42.919136    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:42.922770    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:42.923481    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:42.923481    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:43 GMT
	I0416 17:57:42.923481    6988 round_trippers.go:580]     Audit-Id: 0619e710-cc23-453b-93b8-902006c18fd4
	I0416 17:57:42.923481    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:42.923481    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:42.923481    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:42.923481    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:42.924373    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:42.924671    6988 node_ready.go:53] node "multinode-945500" has status "Ready":"False"
	I0416 17:57:43.422289    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:43.422289    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:43.422289    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:43.422289    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:43.426297    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:43.426759    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:43.426855    6988 round_trippers.go:580]     Audit-Id: 3881c6f2-0168-43dd-afc5-e5828acf3c8d
	I0416 17:57:43.426855    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:43.426855    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:43.426855    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:43.426936    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:43.426936    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:43 GMT
	I0416 17:57:43.427005    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:43.912103    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:43.912103    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:43.912103    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:43.912103    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:43.915707    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:43.916753    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:43.916753    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:44 GMT
	I0416 17:57:43.916753    6988 round_trippers.go:580]     Audit-Id: 5c816ab6-0256-4da7-8677-2eed63915566
	I0416 17:57:43.916782    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:43.916782    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:43.916782    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:43.916782    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:43.917611    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:44.422232    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:44.422232    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:44.422232    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:44.422232    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:44.425983    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:44.426131    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:44.426131    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:44.426131    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:44 GMT
	I0416 17:57:44.426209    6988 round_trippers.go:580]     Audit-Id: 9338168a-3808-4f3d-8a58-744d48096dc5
	I0416 17:57:44.426209    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:44.426209    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:44.426209    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:44.426209    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:44.514747    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:44.514747    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:44.515754    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:57:44.517753    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:44.517753    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:44.517753    6988 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 17:57:44.517753    6988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 17:57:44.517753    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 17:57:44.911211    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:44.911456    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:44.911456    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:44.911456    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:44.915270    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:44.915270    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:44.915270    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:44.915270    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:44.915270    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:44.915270    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:44.915270    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:45 GMT
	I0416 17:57:44.915270    6988 round_trippers.go:580]     Audit-Id: 4c85a024-69e3-42e3-8a96-0b4369f957e4
	I0416 17:57:44.916208    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:45.417189    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:45.417189    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:45.417189    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:45.417189    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:45.424768    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 17:57:45.424768    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:45.424768    6988 round_trippers.go:580]     Audit-Id: 0310038d-76b3-4992-9ac3-7533f23a7d71
	I0416 17:57:45.424768    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:45.424768    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:45.424768    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:45.424768    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:45.424768    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:45 GMT
	I0416 17:57:45.425371    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:45.425371    6988 node_ready.go:53] node "multinode-945500" has status "Ready":"False"
	I0416 17:57:45.923330    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:45.923330    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:45.923330    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:45.923330    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:45.925920    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:45.925920    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:45.926718    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:45.926718    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:45.926718    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:46 GMT
	I0416 17:57:45.926718    6988 round_trippers.go:580]     Audit-Id: 97c2ee9c-f0ff-43e0-b2a8-48327b90a95f
	I0416 17:57:45.926718    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:45.926718    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:45.927203    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:46.418033    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:46.418033    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:46.418033    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:46.418033    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:46.501786    6988 round_trippers.go:574] Response Status: 200 OK in 83 milliseconds
	I0416 17:57:46.501786    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:46.501786    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:46 GMT
	I0416 17:57:46.501786    6988 round_trippers.go:580]     Audit-Id: 7df6f9f0-10ff-4db8-bfad-3fc7f1364386
	I0416 17:57:46.501786    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:46.501786    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:46.501905    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:46.501905    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:46.503216    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:46.635075    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:57:46.635075    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:46.635935    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 17:57:46.921581    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:46.921653    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:46.921653    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:46.921720    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:46.924533    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:46.924533    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:46.924758    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:46.924758    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:46.924758    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:46.924758    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:46.924758    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:47 GMT
	I0416 17:57:46.924758    6988 round_trippers.go:580]     Audit-Id: e78831c8-f850-4752-a899-e59b21c78198
	I0416 17:57:46.924832    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:46.982609    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:57:46.982609    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:46.982609    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:57:47.140657    6988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:57:47.423704    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:47.423704    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:47.423704    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:47.423704    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:47.427881    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:47.428047    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:47.428047    6988 round_trippers.go:580]     Audit-Id: 23292552-c2df-4084-b58f-d36e231163f8
	I0416 17:57:47.428047    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:47.428047    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:47.428047    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:47.428047    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:47.428047    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:47 GMT
	I0416 17:57:47.428436    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:47.428909    6988 node_ready.go:53] node "multinode-945500" has status "Ready":"False"
	I0416 17:57:47.642156    6988 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0416 17:57:47.642156    6988 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0416 17:57:47.642263    6988 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0416 17:57:47.642263    6988 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0416 17:57:47.642263    6988 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0416 17:57:47.642352    6988 command_runner.go:130] > pod/storage-provisioner created
	I0416 17:57:47.915174    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:47.915174    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:47.915174    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:47.915174    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:47.919802    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:47.919802    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:47.919802    6988 round_trippers.go:580]     Audit-Id: 695031a3-c73c-4762-a80a-ead4be6d3a90
	I0416 17:57:47.919802    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:47.919802    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:47.919802    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:47.919802    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:47.919802    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:48 GMT
	I0416 17:57:47.921798    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:48.424055    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:48.424122    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:48.424122    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:48.424122    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:48.427517    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:48.427517    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:48.427517    6988 round_trippers.go:580]     Audit-Id: 7545d9c7-2c95-4fab-863b-976fb672f07e
	I0416 17:57:48.427517    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:48.427517    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:48.427517    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:48.427517    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:48.427517    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:48 GMT
	I0416 17:57:48.428336    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:48.912182    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:48.912285    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:48.912285    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:48.912285    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:48.915718    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:48.915718    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:48.915718    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:48.915718    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:48.915718    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:49 GMT
	I0416 17:57:48.915718    6988 round_trippers.go:580]     Audit-Id: 2263b32c-d20d-46cd-879e-9105b86a7194
	I0416 17:57:48.915718    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:48.915718    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:48.916253    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:49.012275    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 17:57:49.012444    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:49.012783    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 17:57:49.142232    6988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 17:57:49.275828    6988 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0416 17:57:49.276194    6988 round_trippers.go:463] GET https://172.19.91.227:8443/apis/storage.k8s.io/v1/storageclasses
	I0416 17:57:49.276271    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:49.276271    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:49.276381    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:49.279132    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:49.279132    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:49.279132    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:49.279132    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:49.279132    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:49.279132    6988 round_trippers.go:580]     Content-Length: 1273
	I0416 17:57:49.279132    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:49 GMT
	I0416 17:57:49.279397    6988 round_trippers.go:580]     Audit-Id: b06ff280-6eac-43c1-91fe-e3ebbad21f66
	I0416 17:57:49.279397    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:49.279397    6988 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"standard","uid":"fad243f1-4905-48ae-985d-d89cda0607a0","resourceVersion":"419","creationTimestamp":"2024-04-16T17:57:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-16T17:57:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0416 17:57:49.279545    6988 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"fad243f1-4905-48ae-985d-d89cda0607a0","resourceVersion":"419","creationTimestamp":"2024-04-16T17:57:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-16T17:57:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0416 17:57:49.279545    6988 round_trippers.go:463] PUT https://172.19.91.227:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0416 17:57:49.280079    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:49.280079    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:49.280079    6988 round_trippers.go:473]     Content-Type: application/json
	I0416 17:57:49.280122    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:49.283131    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:49.283131    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Audit-Id: 58e327bf-d681-4c51-8630-376535cfdae0
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:49.283131    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:49.283131    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Content-Length: 1220
	I0416 17:57:49.283131    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:49 GMT
	I0416 17:57:49.283131    6988 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"fad243f1-4905-48ae-985d-d89cda0607a0","resourceVersion":"419","creationTimestamp":"2024-04-16T17:57:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-16T17:57:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0416 17:57:49.284142    6988 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 17:57:49.285110    6988 addons.go:505] duration metric: took 8.9631309s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 17:57:49.413824    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:49.413824    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:49.413824    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:49.413824    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:49.420066    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 17:57:49.420066    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:49.420066    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:49 GMT
	I0416 17:57:49.420066    6988 round_trippers.go:580]     Audit-Id: 673fcfb7-e79c-42ba-abaf-e828c3df7a7a
	I0416 17:57:49.420066    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:49.420066    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:49.420066    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:49.420066    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:49.420066    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:49.915557    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:49.915632    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:49.915632    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:49.915632    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:49.920023    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:49.920023    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:49.920023    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:49.920023    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:49.920023    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:49.920023    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:49.920023    6988 round_trippers.go:580]     Audit-Id: cb813c2c-6bb9-41d0-a192-81d5df39cc31
	I0416 17:57:49.920023    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:49.920752    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"331","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4935 chars]
	I0416 17:57:49.920881    6988 node_ready.go:53] node "multinode-945500" has status "Ready":"False"
	I0416 17:57:50.414309    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:50.414309    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.414309    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.414309    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.421246    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 17:57:50.421246    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.421246    6988 round_trippers.go:580]     Audit-Id: 9a47d54e-a489-4e7c-8e6e-1768c6e24a06
	I0416 17:57:50.421246    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.421246    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.421246    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.421246    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.421246    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:50.421586    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:50.422041    6988 node_ready.go:49] node "multinode-945500" has status "Ready":"True"
	I0416 17:57:50.422127    6988 node_ready.go:38] duration metric: took 9.5128501s for node "multinode-945500" to be "Ready" ...
	I0416 17:57:50.422127    6988 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:57:50.422288    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:50.422288    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.422288    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.422352    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.426293    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:50.426293    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.426293    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:50.426293    6988 round_trippers.go:580]     Audit-Id: 13196519-ea29-4856-beaa-5c943f886806
	I0416 17:57:50.426293    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.426293    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.426645    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.426645    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.427551    6988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56336 chars]
	I0416 17:57:50.432315    6988 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:50.432315    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:50.432315    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.432315    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.432315    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.435446    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:50.435446    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.435446    6988 round_trippers.go:580]     Audit-Id: 0da838d3-4490-46a7-8d52-0929abb29d06
	I0416 17:57:50.435446    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.435446    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.435446    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.435446    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.435446    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:50.435667    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0416 17:57:50.436341    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:50.436417    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.436417    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.436417    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.441670    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 17:57:50.441670    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.441670    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:50 GMT
	I0416 17:57:50.441670    6988 round_trippers.go:580]     Audit-Id: 7f63ee25-4ff7-418f-b7b2-b71003d58b29
	I0416 17:57:50.441670    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.441670    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.441670    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.441670    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.441670    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:50.933620    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:50.933620    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.933620    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.933620    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.936638    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:50.936638    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.937401    6988 round_trippers.go:580]     Audit-Id: 61428305-720d-4f2d-9189-d4c9892ef7e3
	I0416 17:57:50.937401    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.937401    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.937401    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.937401    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.937401    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:51 GMT
	I0416 17:57:50.937680    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0416 17:57:50.938372    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:50.938438    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:50.938438    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:50.938438    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:50.940646    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:50.940646    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:50.940646    6988 round_trippers.go:580]     Audit-Id: 62d4cd2d-a2dc-447d-8fe8-0ab2e8469374
	I0416 17:57:50.940646    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:50.940646    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:50.940646    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:50.940646    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:50.940646    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:51 GMT
	I0416 17:57:50.941893    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:51.436888    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:51.436973    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:51.437057    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:51.437057    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:51.440468    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:51.440468    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:51.440468    6988 round_trippers.go:580]     Audit-Id: 854d513c-8ed8-40d2-a6f4-c3ce631c5044
	I0416 17:57:51.440468    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:51.440468    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:51.440468    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:51.440468    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:51.440468    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:51 GMT
	I0416 17:57:51.441473    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0416 17:57:51.442446    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:51.442513    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:51.442513    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:51.442513    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:51.448074    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 17:57:51.448074    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:51.448074    6988 round_trippers.go:580]     Audit-Id: ea821fd7-5bb9-4fc8-adab-1d7de329d33c
	I0416 17:57:51.448074    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:51.448074    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:51.448074    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:51.448074    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:51.448074    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:51 GMT
	I0416 17:57:51.448761    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:51.936346    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:51.936438    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:51.936438    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:51.936438    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:51.940774    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:51.940774    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:51.940774    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:51.940774    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:51.940774    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:51.940774    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:51.940774    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:51.940774    6988 round_trippers.go:580]     Audit-Id: 39edef38-eddb-4269-abe8-a908e1d21987
	I0416 17:57:51.941262    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"427","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0416 17:57:51.941999    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:51.942068    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:51.942068    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:51.942068    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:51.944728    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:51.944728    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:51.944728    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:51.944728    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:51.945637    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:51.945637    6988 round_trippers.go:580]     Audit-Id: e9f648f9-92bc-4242-8c2c-17b661038154
	I0416 17:57:51.945637    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:51.945637    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:51.945961    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.434152    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 17:57:52.434152    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.434152    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.434152    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.438737    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:52.438737    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.438905    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.438905    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.438905    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.438905    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.438905    6988 round_trippers.go:580]     Audit-Id: 64fc4c09-2c08-4c20-886d-b65cc89badc2
	I0416 17:57:52.438905    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.439311    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0416 17:57:52.440372    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.440372    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.440471    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.440471    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.442800    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.442800    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.442800    6988 round_trippers.go:580]     Audit-Id: 69a074dd-0323-4dfd-a4d9-2a31cf93ae57
	I0416 17:57:52.442800    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.442800    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.442800    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.442800    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.442800    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.443974    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.444376    6988 pod_ready.go:92] pod "coredns-76f75df574-86z7h" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.444463    6988 pod_ready.go:81] duration metric: took 2.0119463s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.444463    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.444559    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 17:57:52.444559    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.444559    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.444559    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.448264    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:52.448675    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.448709    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.448709    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.448709    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.448709    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.448709    6988 round_trippers.go:580]     Audit-Id: 6a1f3697-4191-47e0-93ea-8556479112b5
	I0416 17:57:52.448709    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.448895    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"245cef70-3506-471b-9bf6-dd14a2c23d8c","resourceVersion":"372","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.91.227:2379","kubernetes.io/config.hash":"c735a32dacf9631b2b4787fe99cff316","kubernetes.io/config.mirror":"c735a32dacf9631b2b4787fe99cff316","kubernetes.io/config.seen":"2024-04-16T17:57:28.101466445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0416 17:57:52.449544    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.449618    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.449618    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.449618    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.457774    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0416 17:57:52.457774    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.457774    6988 round_trippers.go:580]     Audit-Id: 6aa9935f-5cde-4c2d-90c1-770e6d9b42ec
	I0416 17:57:52.457774    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.457774    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.457774    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.457774    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.457774    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.457774    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.457774    6988 pod_ready.go:92] pod "etcd-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.457774    6988 pod_ready.go:81] duration metric: took 13.3102ms for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.458783    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.458817    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-945500
	I0416 17:57:52.458817    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.458817    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.458817    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.462379    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:52.462379    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.462379    6988 round_trippers.go:580]     Audit-Id: 3d6fa3f7-ff7f-4322-a2e8-b5a0c4fb1daf
	I0416 17:57:52.462379    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.462379    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.462379    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.462379    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.462379    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.462379    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-945500","namespace":"kube-system","uid":"c6ae969a-de5d-4c7e-af09-b1a5eb21f2ab","resourceVersion":"314","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.91.227:8443","kubernetes.io/config.hash":"564fae5a3e5851c815d6092b123a5395","kubernetes.io/config.mirror":"564fae5a3e5851c815d6092b123a5395","kubernetes.io/config.seen":"2024-04-16T17:57:28.101471746Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0416 17:57:52.464244    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.464374    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.464374    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.464374    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.466690    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.466690    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.466690    6988 round_trippers.go:580]     Audit-Id: d3396616-a825-4d83-94f7-1691134d1559
	I0416 17:57:52.466690    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.466690    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.466690    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.466690    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.466690    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.467128    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.467128    6988 pod_ready.go:92] pod "kube-apiserver-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.467128    6988 pod_ready.go:81] duration metric: took 8.3444ms for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.467128    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.467128    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-945500
	I0416 17:57:52.467655    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.467655    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.467655    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.469965    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.469965    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.469965    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.469965    6988 round_trippers.go:580]     Audit-Id: 69b40722-0130-4c39-98a1-4a3e7990d75a
	I0416 17:57:52.469965    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.469965    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.469965    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.469965    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.469965    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-945500","namespace":"kube-system","uid":"01b937c2-9827-4240-83f0-3536fec5eb5e","resourceVersion":"345","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.mirror":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.seen":"2024-04-16T17:57:28.101473146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0416 17:57:52.471692    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.471736    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.471736    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.471736    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.474312    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.474312    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.474312    6988 round_trippers.go:580]     Audit-Id: ef6911fd-c5b9-4c1a-85d8-6d4810547589
	I0416 17:57:52.474312    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.474312    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.474312    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.474312    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.474312    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.474842    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.475259    6988 pod_ready.go:92] pod "kube-controller-manager-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.475298    6988 pod_ready.go:81] duration metric: took 8.1314ms for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.475298    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.475372    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 17:57:52.475407    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.475446    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.475446    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.480328    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:52.480328    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.480328    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.480328    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.480328    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.480328    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.480328    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.480328    6988 round_trippers.go:580]     Audit-Id: 5505b192-812e-4b7d-b573-cc48b255735a
	I0416 17:57:52.480328    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rfxsg","generateName":"kube-proxy-","namespace":"kube-system","uid":"b740e6e0-4768-4dd4-a958-307662a92578","resourceVersion":"401","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0416 17:57:52.480969    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.480969    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.480969    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.480969    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.484123    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:52.484123    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.484123    6988 round_trippers.go:580]     Audit-Id: 242d2743-3177-42b4-9e74-5bce35db3f1d
	I0416 17:57:52.484123    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.484123    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.484123    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.484123    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.484123    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.484955    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.485557    6988 pod_ready.go:92] pod "kube-proxy-rfxsg" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.485602    6988 pod_ready.go:81] duration metric: took 10.2584ms for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.485602    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.638123    6988 request.go:629] Waited for 152.4159ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 17:57:52.638123    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 17:57:52.638123    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.638123    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.638123    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.642880    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:52.642880    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.642880    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.642880    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.642880    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.642880    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:52 GMT
	I0416 17:57:52.642880    6988 round_trippers.go:580]     Audit-Id: 8f2e930a-7531-48ab-83eb-71103cec3dde
	I0416 17:57:52.642880    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.642880    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-945500","namespace":"kube-system","uid":"a09e52e8-1ac2-4c22-8a3d-57969fae85a9","resourceVersion":"310","creationTimestamp":"2024-04-16T17:57:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.mirror":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.seen":"2024-04-16T17:57:20.694761708Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0416 17:57:52.840231    6988 request.go:629] Waited for 196.2271ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.840540    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 17:57:52.840540    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.840640    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.840640    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.845870    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 17:57:52.845870    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.845870    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.845870    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.845870    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.845870    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.845870    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:52.845870    6988 round_trippers.go:580]     Audit-Id: 05acaca5-b7c1-4fab-9ace-d775a055e4f5
	I0416 17:57:52.846425    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4790 chars]
	I0416 17:57:52.846879    6988 pod_ready.go:92] pod "kube-scheduler-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:52.846957    6988 pod_ready.go:81] duration metric: took 361.3343ms for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:52.846957    6988 pod_ready.go:38] duration metric: took 2.4246918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:57:52.846957    6988 api_server.go:52] waiting for apiserver process to appear ...
	I0416 17:57:52.859063    6988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:57:52.885312    6988 command_runner.go:130] > 2058
	I0416 17:57:52.885400    6988 api_server.go:72] duration metric: took 12.562985s to wait for apiserver process to appear ...
	I0416 17:57:52.885400    6988 api_server.go:88] waiting for apiserver healthz status ...
	I0416 17:57:52.885400    6988 api_server.go:253] Checking apiserver healthz at https://172.19.91.227:8443/healthz ...
	I0416 17:57:52.898178    6988 api_server.go:279] https://172.19.91.227:8443/healthz returned 200:
	ok
	I0416 17:57:52.898356    6988 round_trippers.go:463] GET https://172.19.91.227:8443/version
	I0416 17:57:52.898430    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:52.898430    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:52.898463    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:52.900671    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 17:57:52.900731    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:52.900731    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:52.900731    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Content-Length: 263
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:52.900731    6988 round_trippers.go:580]     Audit-Id: 23327aeb-4415-44a9-ac4c-ac1fb850d1c4
	I0416 17:57:52.900731    6988 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0416 17:57:52.900731    6988 api_server.go:141] control plane version: v1.29.3
	I0416 17:57:52.900731    6988 api_server.go:131] duration metric: took 15.3302ms to wait for apiserver health ...
	I0416 17:57:52.900731    6988 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 17:57:53.042203    6988 request.go:629] Waited for 141.464ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:53.042203    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:53.042203    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:53.042203    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:53.042203    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:53.047811    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 17:57:53.047811    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:53.047931    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:53.047931    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:53.047931    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:53.047931    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:53.047931    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:53.047931    6988 round_trippers.go:580]     Audit-Id: 0112d2ef-1059-4960-9329-11966d09c0ed
	I0416 17:57:53.050025    6988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56450 chars]
	I0416 17:57:53.056232    6988 system_pods.go:59] 8 kube-system pods found
	I0416 17:57:53.056303    6988 system_pods.go:61] "coredns-76f75df574-86z7h" [1ca004a0-0575-4576-a5ed-ba0891a7d277] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "etcd-multinode-945500" [245cef70-3506-471b-9bf6-dd14a2c23d8c] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kindnet-tp7jl" [91595b62-10ee-47cb-a0c9-2ca83ad70af7] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kube-apiserver-multinode-945500" [c6ae969a-de5d-4c7e-af09-b1a5eb21f2ab] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kube-controller-manager-multinode-945500" [01b937c2-9827-4240-83f0-3536fec5eb5e] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kube-proxy-rfxsg" [b740e6e0-4768-4dd4-a958-307662a92578] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "kube-scheduler-multinode-945500" [a09e52e8-1ac2-4c22-8a3d-57969fae85a9] Running
	I0416 17:57:53.056303    6988 system_pods.go:61] "storage-provisioner" [3bd5cc95-eef6-473e-b6f9-898568046f1b] Running
	I0416 17:57:53.056378    6988 system_pods.go:74] duration metric: took 155.5639ms to wait for pod list to return data ...
	I0416 17:57:53.056378    6988 default_sa.go:34] waiting for default service account to be created ...
	I0416 17:57:53.242714    6988 request.go:629] Waited for 186.2414ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/default/serviceaccounts
	I0416 17:57:53.242956    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/default/serviceaccounts
	I0416 17:57:53.242956    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:53.243091    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:53.243091    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:53.246460    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:53.246460    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:53.246962    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:53.246962    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Content-Length: 261
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Audit-Id: da3e035a-782e-4d26-b641-e9ec06113208
	I0416 17:57:53.246962    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:53.247049    6988 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"26260d2a-9800-4f2e-87ba-a34049d52e3f","resourceVersion":"332","creationTimestamp":"2024-04-16T17:57:40Z"}}]}
	I0416 17:57:53.247481    6988 default_sa.go:45] found service account: "default"
	I0416 17:57:53.247563    6988 default_sa.go:55] duration metric: took 191.174ms for default service account to be created ...
	I0416 17:57:53.247563    6988 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 17:57:53.445373    6988 request.go:629] Waited for 197.6083ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:53.445373    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 17:57:53.445373    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:53.445373    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:53.445373    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:53.453613    6988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0416 17:57:53.453613    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:53.453613    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:53.453613    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:53.453613    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:53.453613    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:53.453613    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:53.453613    6988 round_trippers.go:580]     Audit-Id: a54cbc48-ccbf-4ab0-b75f-121f6c3ab39c
	I0416 17:57:53.454598    6988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56450 chars]
	I0416 17:57:53.457215    6988 system_pods.go:86] 8 kube-system pods found
	I0416 17:57:53.457215    6988 system_pods.go:89] "coredns-76f75df574-86z7h" [1ca004a0-0575-4576-a5ed-ba0891a7d277] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "etcd-multinode-945500" [245cef70-3506-471b-9bf6-dd14a2c23d8c] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kindnet-tp7jl" [91595b62-10ee-47cb-a0c9-2ca83ad70af7] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kube-apiserver-multinode-945500" [c6ae969a-de5d-4c7e-af09-b1a5eb21f2ab] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kube-controller-manager-multinode-945500" [01b937c2-9827-4240-83f0-3536fec5eb5e] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kube-proxy-rfxsg" [b740e6e0-4768-4dd4-a958-307662a92578] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "kube-scheduler-multinode-945500" [a09e52e8-1ac2-4c22-8a3d-57969fae85a9] Running
	I0416 17:57:53.457215    6988 system_pods.go:89] "storage-provisioner" [3bd5cc95-eef6-473e-b6f9-898568046f1b] Running
	I0416 17:57:53.457215    6988 system_pods.go:126] duration metric: took 209.6402ms to wait for k8s-apps to be running ...
	I0416 17:57:53.457215    6988 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 17:57:53.465993    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:57:53.490843    6988 system_svc.go:56] duration metric: took 32.799ms WaitForService to wait for kubelet
	I0416 17:57:53.490843    6988 kubeadm.go:576] duration metric: took 13.1684808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:57:53.490945    6988 node_conditions.go:102] verifying NodePressure condition ...
	I0416 17:57:53.646796    6988 request.go:629] Waited for 155.5885ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes
	I0416 17:57:53.647092    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes
	I0416 17:57:53.647092    6988 round_trippers.go:469] Request Headers:
	I0416 17:57:53.647092    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 17:57:53.647092    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 17:57:53.650750    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 17:57:53.650750    6988 round_trippers.go:577] Response Headers:
	I0416 17:57:53.650750    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 17:57:53.650750    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 17:57:53.650750    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 17:57:53 GMT
	I0416 17:57:53.651249    6988 round_trippers.go:580]     Audit-Id: a39fa908-8f98-49bc-a6db-1564faa14911
	I0416 17:57:53.651249    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 17:57:53.651249    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 17:57:53.651424    6988 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"422","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 4843 chars]
	I0416 17:57:53.651922    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 17:57:53.651922    6988 node_conditions.go:123] node cpu capacity is 2
	I0416 17:57:53.651922    6988 node_conditions.go:105] duration metric: took 160.9684ms to run NodePressure ...
	I0416 17:57:53.652035    6988 start.go:240] waiting for startup goroutines ...
	I0416 17:57:53.652035    6988 start.go:245] waiting for cluster config update ...
	I0416 17:57:53.652035    6988 start.go:254] writing updated cluster config ...
	I0416 17:57:53.653564    6988 out.go:177] 
	I0416 17:57:53.669380    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:57:53.669380    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:57:53.672905    6988 out.go:177] * Starting "multinode-945500-m02" worker node in "multinode-945500" cluster
	I0416 17:57:53.673088    6988 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 17:57:53.673617    6988 cache.go:56] Caching tarball of preloaded images
	I0416 17:57:53.673750    6988 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 17:57:53.673750    6988 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 17:57:53.674279    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:57:53.682401    6988 start.go:360] acquireMachinesLock for multinode-945500-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:57:53.682401    6988 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-945500-m02"
	I0416 17:57:53.682989    6988 start.go:93] Provisioning new machine with config: &{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0416 17:57:53.682989    6988 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0416 17:57:53.683581    6988 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 17:57:53.683581    6988 start.go:159] libmachine.API.Create for "multinode-945500" (driver="hyperv")
	I0416 17:57:53.683581    6988 client.go:168] LocalClient.Create starting
	I0416 17:57:53.684171    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0416 17:57:53.684171    6988 main.go:141] libmachine: Decoding PEM data...
	I0416 17:57:53.684730    6988 main.go:141] libmachine: Parsing certificate...
	I0416 17:57:53.684763    6988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0416 17:57:53.684763    6988 main.go:141] libmachine: Decoding PEM data...
	I0416 17:57:53.684763    6988 main.go:141] libmachine: Parsing certificate...
	I0416 17:57:53.684763    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0416 17:57:55.392368    6988 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0416 17:57:55.392368    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:55.393364    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0416 17:57:56.931487    6988 main.go:141] libmachine: [stdout =====>] : False
	
	I0416 17:57:56.931487    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:56.931487    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 17:57:58.272841    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 17:57:58.273519    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:57:58.273519    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 17:58:01.537799    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 17:58:01.537799    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:01.539609    6988 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 17:58:01.848885    6988 main.go:141] libmachine: Creating SSH key...
	I0416 17:58:02.010218    6988 main.go:141] libmachine: Creating VM...
	I0416 17:58:02.011217    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0416 17:58:04.625040    6988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0416 17:58:04.625040    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:04.625917    6988 main.go:141] libmachine: Using switch "Default Switch"
	I0416 17:58:04.625917    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0416 17:58:06.258751    6988 main.go:141] libmachine: [stdout =====>] : True
	
	I0416 17:58:06.258751    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:06.258751    6988 main.go:141] libmachine: Creating VHD
	I0416 17:58:06.258751    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0416 17:58:09.852420    6988 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C09A8F8B-563A-41CF-AB1F-9B4C422F3FC9
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0416 17:58:09.852568    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:09.852568    6988 main.go:141] libmachine: Writing magic tar header
	I0416 17:58:09.852638    6988 main.go:141] libmachine: Writing SSH key tar header
	I0416 17:58:09.862039    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0416 17:58:12.878751    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:12.878751    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:12.878751    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\disk.vhd' -SizeBytes 20000MB
	I0416 17:58:15.237605    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:15.237605    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:15.237605    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-945500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0416 17:58:18.410858    6988 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-945500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0416 17:58:18.411873    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:18.411914    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-945500-m02 -DynamicMemoryEnabled $false
	I0416 17:58:20.486445    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:20.486524    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:20.486600    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-945500-m02 -Count 2
	I0416 17:58:22.474057    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:22.474057    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:22.474057    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-945500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\boot2docker.iso'
	I0416 17:58:24.877959    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:24.877959    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:24.878134    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-945500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\disk.vhd'
	I0416 17:58:27.308442    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:27.309253    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:27.309253    6988 main.go:141] libmachine: Starting VM...
	I0416 17:58:27.309346    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-945500-m02
	I0416 17:58:29.937973    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:29.937973    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:29.937973    6988 main.go:141] libmachine: Waiting for host to start...
	I0416 17:58:29.938140    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:32.040669    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:32.040669    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:32.040763    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:34.346849    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:34.346849    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:35.361237    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:37.380851    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:37.380851    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:37.381523    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:39.667097    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:39.667097    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:40.670143    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:42.688257    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:42.688257    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:42.688328    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:44.946196    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:44.946196    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:45.948919    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:47.976127    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:47.976127    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:47.976535    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:50.265300    6988 main.go:141] libmachine: [stdout =====>] : 
	I0416 17:58:50.265477    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:51.278063    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:53.353234    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:53.353234    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:53.353542    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:58:55.731097    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:58:55.731585    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:55.731648    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:57.706259    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:57.706259    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:57.706259    6988 machine.go:94] provisionDockerMachine start ...
	I0416 17:58:57.706337    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:58:59.674406    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:58:59.674406    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:58:59.675593    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:01.982982    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:01.982982    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:01.989231    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:02.000855    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:02.000855    6988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:59:02.131967    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 17:59:02.132116    6988 buildroot.go:166] provisioning hostname "multinode-945500-m02"
	I0416 17:59:02.132244    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:04.030355    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:04.031102    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:04.031102    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:06.380424    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:06.380424    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:06.385493    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:06.385574    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:06.385574    6988 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-945500-m02 && echo "multinode-945500-m02" | sudo tee /etc/hostname
	I0416 17:59:06.536173    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-945500-m02
	
	I0416 17:59:06.536238    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:08.514008    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:08.514084    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:08.514108    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:10.867331    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:10.867331    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:10.872002    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:10.872167    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:10.872167    6988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-945500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-945500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-945500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:59:11.029689    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:59:11.029689    6988 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 17:59:11.029689    6988 buildroot.go:174] setting up certificates
	I0416 17:59:11.029689    6988 provision.go:84] configureAuth start
	I0416 17:59:11.029689    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:13.049800    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:13.050575    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:13.050646    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:15.359589    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:15.359589    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:15.359846    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:17.299020    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:17.299020    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:17.300075    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:19.605590    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:19.605590    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:19.605590    6988 provision.go:143] copyHostCerts
	I0416 17:59:19.605792    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 17:59:19.606057    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 17:59:19.606057    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 17:59:19.606675    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 17:59:19.607815    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 17:59:19.608147    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 17:59:19.608226    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 17:59:19.608494    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 17:59:19.609301    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 17:59:19.609365    6988 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 17:59:19.609365    6988 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 17:59:19.609365    6988 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 17:59:19.610613    6988 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-945500-m02 san=[127.0.0.1 172.19.91.6 localhost minikube multinode-945500-m02]
	I0416 17:59:19.702929    6988 provision.go:177] copyRemoteCerts
	I0416 17:59:19.710522    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:59:19.710522    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:21.626659    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:21.626659    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:21.627629    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:23.970899    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:23.970899    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:23.971221    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 17:59:24.079459    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3686883s)
	I0416 17:59:24.079459    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 17:59:24.080474    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 17:59:24.123694    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 17:59:24.124179    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0416 17:59:24.164830    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 17:59:24.165649    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 17:59:24.208692    6988 provision.go:87] duration metric: took 13.1782183s to configureAuth
	I0416 17:59:24.208692    6988 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:59:24.209067    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 17:59:24.209160    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:26.153425    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:26.153425    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:26.153714    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:28.507518    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:28.507518    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:28.511037    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:28.511634    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:28.511634    6988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 17:59:28.639516    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 17:59:28.639516    6988 buildroot.go:70] root file system type: tmpfs
	I0416 17:59:28.639516    6988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 17:59:28.639516    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:30.530854    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:30.531013    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:30.531013    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:32.826918    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:32.826918    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:32.832383    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:32.832984    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:32.832984    6988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.91.227"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 17:59:32.992600    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.91.227
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 17:59:32.992774    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:34.963694    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:34.963694    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:34.963799    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:37.247922    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:37.247922    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:37.252024    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:37.252024    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:37.252024    6988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 17:59:39.216273    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 17:59:39.216273    6988 machine.go:97] duration metric: took 41.5076568s to provisionDockerMachine
	I0416 17:59:39.216367    6988 client.go:171] duration metric: took 1m45.5267916s to LocalClient.Create
	I0416 17:59:39.216420    6988 start.go:167] duration metric: took 1m45.5268452s to libmachine.API.Create "multinode-945500"
	I0416 17:59:39.216420    6988 start.go:293] postStartSetup for "multinode-945500-m02" (driver="hyperv")
	I0416 17:59:39.216420    6988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:59:39.225464    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:59:39.225464    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:41.131652    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:41.131652    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:41.132015    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:43.445904    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:43.445904    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:43.446473    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 17:59:43.549649    6988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3239396s)
	I0416 17:59:43.558710    6988 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:59:43.563635    6988 command_runner.go:130] > NAME=Buildroot
	I0416 17:59:43.563635    6988 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 17:59:43.563635    6988 command_runner.go:130] > ID=buildroot
	I0416 17:59:43.563635    6988 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 17:59:43.563635    6988 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 17:59:43.563635    6988 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:59:43.563635    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 17:59:43.565096    6988 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 17:59:43.566332    6988 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 17:59:43.566332    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 17:59:43.575822    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:59:43.593251    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 17:59:43.635050    6988 start.go:296] duration metric: took 4.4183786s for postStartSetup
	I0416 17:59:43.637173    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:45.591586    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:45.591586    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:45.591966    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:47.994749    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:47.994749    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:47.994889    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 17:59:47.996574    6988 start.go:128] duration metric: took 1m54.3070064s to createHost
	I0416 17:59:47.996664    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:49.890109    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:49.890109    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:49.890628    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:52.220872    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:52.220872    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:52.225852    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:52.226248    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:52.226248    6988 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 17:59:52.368040    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713290392.538512769
	
	I0416 17:59:52.368040    6988 fix.go:216] guest clock: 1713290392.538512769
	I0416 17:59:52.368040    6988 fix.go:229] Guest: 2024-04-16 17:59:52.538512769 +0000 UTC Remote: 2024-04-16 17:59:47.9965749 +0000 UTC m=+309.651339801 (delta=4.541937869s)
	I0416 17:59:52.368159    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:54.442418    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:54.442507    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:54.442581    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 17:59:56.760874    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 17:59:56.760874    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:56.765985    6988 main.go:141] libmachine: Using SSH client type: native
	I0416 17:59:56.766627    6988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.91.6 22 <nil> <nil>}
	I0416 17:59:56.766627    6988 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713290392
	I0416 17:59:56.909969    6988 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 17:59:52 UTC 2024
	
	I0416 17:59:56.909969    6988 fix.go:236] clock set: Tue Apr 16 17:59:52 UTC 2024
	 (err=<nil>)
	I0416 17:59:56.909969    6988 start.go:83] releasing machines lock for "multinode-945500-m02", held for 2m3.2205685s
	I0416 17:59:56.909969    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 17:59:58.843464    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 17:59:58.843464    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 17:59:58.843546    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:00:01.159738    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:00:01.160789    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:01.160917    6988 out.go:177] * Found network options:
	I0416 18:00:01.161771    6988 out.go:177]   - NO_PROXY=172.19.91.227
	W0416 18:00:01.162783    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 18:00:01.163550    6988 out.go:177]   - NO_PROXY=172.19.91.227
	W0416 18:00:01.163820    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 18:00:01.165081    6988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 18:00:01.167381    6988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 18:00:01.167483    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:00:01.178390    6988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 18:00:01.178390    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:00:03.244075    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:00:03.244075    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:00:03.244075    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:03.244075    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:03.244356    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:00:03.244356    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:00:05.758057    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:00:05.758057    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:05.758057    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:00:05.784117    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:00:05.784117    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:05.784117    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:00:05.960484    6988 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 18:00:05.960638    6988 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7929841s)
	I0416 18:00:05.960638    6988 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0416 18:00:05.960638    6988 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.781976s)
	W0416 18:00:05.960638    6988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 18:00:05.975053    6988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 18:00:06.012668    6988 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0416 18:00:06.012756    6988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 18:00:06.012756    6988 start.go:494] detecting cgroup driver to use...
	I0416 18:00:06.012756    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:00:06.050850    6988 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0416 18:00:06.061001    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 18:00:06.091844    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 18:00:06.110783    6988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 18:00:06.118610    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 18:00:06.144577    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:00:06.171490    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 18:00:06.198550    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:00:06.226893    6988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 18:00:06.255518    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 18:00:06.285057    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 18:00:06.314136    6988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 18:00:06.344453    6988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 18:00:06.362440    6988 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 18:00:06.374326    6988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 18:00:06.400901    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:06.587114    6988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 18:00:06.621553    6988 start.go:494] detecting cgroup driver to use...
	I0416 18:00:06.630654    6988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 18:00:06.656160    6988 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0416 18:00:06.656235    6988 command_runner.go:130] > [Unit]
	I0416 18:00:06.656235    6988 command_runner.go:130] > Description=Docker Application Container Engine
	I0416 18:00:06.656235    6988 command_runner.go:130] > Documentation=https://docs.docker.com
	I0416 18:00:06.656235    6988 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0416 18:00:06.656235    6988 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0416 18:00:06.656235    6988 command_runner.go:130] > StartLimitBurst=3
	I0416 18:00:06.656235    6988 command_runner.go:130] > StartLimitIntervalSec=60
	I0416 18:00:06.656235    6988 command_runner.go:130] > [Service]
	I0416 18:00:06.656235    6988 command_runner.go:130] > Type=notify
	I0416 18:00:06.656235    6988 command_runner.go:130] > Restart=on-failure
	I0416 18:00:06.656235    6988 command_runner.go:130] > Environment=NO_PROXY=172.19.91.227
	I0416 18:00:06.656235    6988 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0416 18:00:06.656235    6988 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0416 18:00:06.656235    6988 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0416 18:00:06.656235    6988 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0416 18:00:06.656235    6988 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0416 18:00:06.656235    6988 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0416 18:00:06.656235    6988 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0416 18:00:06.656235    6988 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0416 18:00:06.656235    6988 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0416 18:00:06.656235    6988 command_runner.go:130] > ExecStart=
	I0416 18:00:06.656778    6988 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0416 18:00:06.656778    6988 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0416 18:00:06.656820    6988 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0416 18:00:06.656870    6988 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0416 18:00:06.656870    6988 command_runner.go:130] > LimitNOFILE=infinity
	I0416 18:00:06.656911    6988 command_runner.go:130] > LimitNPROC=infinity
	I0416 18:00:06.656911    6988 command_runner.go:130] > LimitCORE=infinity
	I0416 18:00:06.656911    6988 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0416 18:00:06.656911    6988 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0416 18:00:06.656911    6988 command_runner.go:130] > TasksMax=infinity
	I0416 18:00:06.656911    6988 command_runner.go:130] > TimeoutStartSec=0
	I0416 18:00:06.656911    6988 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0416 18:00:06.656911    6988 command_runner.go:130] > Delegate=yes
	I0416 18:00:06.656911    6988 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0416 18:00:06.656911    6988 command_runner.go:130] > KillMode=process
	I0416 18:00:06.656911    6988 command_runner.go:130] > [Install]
	I0416 18:00:06.656911    6988 command_runner.go:130] > WantedBy=multi-user.target
	I0416 18:00:06.666231    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:00:06.697894    6988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 18:00:06.737622    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:00:06.771467    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:00:06.804240    6988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 18:00:06.854175    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:00:06.875932    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:00:06.907847    6988 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0416 18:00:06.916941    6988 ssh_runner.go:195] Run: which cri-dockerd
	I0416 18:00:06.922573    6988 command_runner.go:130] > /usr/bin/cri-dockerd
	I0416 18:00:06.930663    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 18:00:06.948367    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 18:00:06.987048    6988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 18:00:07.191969    6988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 18:00:07.382844    6988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 18:00:07.382971    6988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 18:00:07.425295    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:07.611967    6988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 18:00:10.072387    6988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.460242s)
	I0416 18:00:10.082602    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 18:00:10.120067    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 18:00:10.155302    6988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 18:00:10.359234    6988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 18:00:10.554817    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:10.747932    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 18:00:10.786544    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 18:00:10.819302    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:10.999957    6988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 18:00:11.099015    6988 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 18:00:11.111636    6988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 18:00:11.122504    6988 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0416 18:00:11.122504    6988 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0416 18:00:11.122504    6988 command_runner.go:130] > Device: 0,22	Inode: 871         Links: 1
	I0416 18:00:11.122504    6988 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0416 18:00:11.122504    6988 command_runner.go:130] > Access: 2024-04-16 18:00:11.194886190 +0000
	I0416 18:00:11.122504    6988 command_runner.go:130] > Modify: 2024-04-16 18:00:11.194886190 +0000
	I0416 18:00:11.122504    6988 command_runner.go:130] > Change: 2024-04-16 18:00:11.200886564 +0000
	I0416 18:00:11.122504    6988 command_runner.go:130] >  Birth: -
	I0416 18:00:11.122504    6988 start.go:562] Will wait 60s for crictl version
	I0416 18:00:11.131362    6988 ssh_runner.go:195] Run: which crictl
	I0416 18:00:11.136657    6988 command_runner.go:130] > /usr/bin/crictl
	I0416 18:00:11.146046    6988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 18:00:11.199867    6988 command_runner.go:130] > Version:  0.1.0
	I0416 18:00:11.199867    6988 command_runner.go:130] > RuntimeName:  docker
	I0416 18:00:11.199867    6988 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0416 18:00:11.199867    6988 command_runner.go:130] > RuntimeApiVersion:  v1
	I0416 18:00:11.199867    6988 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 18:00:11.205859    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 18:00:11.237864    6988 command_runner.go:130] > 26.0.1
	I0416 18:00:11.245954    6988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 18:00:11.279233    6988 command_runner.go:130] > 26.0.1
	I0416 18:00:11.280642    6988 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 18:00:11.281457    6988 out.go:177]   - env NO_PROXY=172.19.91.227
	I0416 18:00:11.282089    6988 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 18:00:11.285919    6988 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 18:00:11.285919    6988 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 18:00:11.285919    6988 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 18:00:11.285919    6988 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 18:00:11.289016    6988 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 18:00:11.289092    6988 ip.go:210] interface addr: 172.19.80.1/20
	I0416 18:00:11.297335    6988 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 18:00:11.303557    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 18:00:11.324932    6988 mustload.go:65] Loading cluster: multinode-945500
	I0416 18:00:11.324932    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:00:11.326302    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:00:13.285643    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:00:13.285643    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:13.285643    6988 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:00:13.285961    6988 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500 for IP: 172.19.91.6
	I0416 18:00:13.285961    6988 certs.go:194] generating shared ca certs ...
	I0416 18:00:13.285961    6988 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:13.286821    6988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 18:00:13.287059    6988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 18:00:13.287230    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 18:00:13.287572    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 18:00:13.287754    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 18:00:13.287938    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 18:00:13.288586    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 18:00:13.288985    6988 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 18:00:13.289144    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 18:00:13.289487    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 18:00:13.289775    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 18:00:13.290139    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 18:00:13.290481    6988 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 18:00:13.290481    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.291100    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.291100    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.291100    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 18:00:13.340860    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 18:00:13.392323    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 18:00:13.436417    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 18:00:13.477907    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 18:00:13.525089    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 18:00:13.566780    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 18:00:13.622111    6988 ssh_runner.go:195] Run: openssl version
	I0416 18:00:13.630969    6988 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0416 18:00:13.644134    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 18:00:13.673969    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.680217    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.680500    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.688237    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 18:00:13.696922    6988 command_runner.go:130] > 3ec20f2e
	I0416 18:00:13.708831    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 18:00:13.733581    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 18:00:13.760217    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.766741    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.767776    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.776508    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:13.784406    6988 command_runner.go:130] > b5213941
	I0416 18:00:13.793775    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 18:00:13.827353    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 18:00:13.855989    6988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.863594    6988 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.863671    6988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.872713    6988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 18:00:13.881385    6988 command_runner.go:130] > 51391683
	I0416 18:00:13.891867    6988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
	I0416 18:00:13.919310    6988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 18:00:13.925213    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 18:00:13.925213    6988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 18:00:13.925406    6988 kubeadm.go:928] updating node {m02 172.19.91.6 8443 v1.29.3 docker false true} ...
	I0416 18:00:13.925406    6988 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-945500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.91.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 18:00:13.933333    6988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 18:00:13.949475    6988 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	I0416 18:00:13.949595    6988 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0416 18:00:13.961381    6988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0416 18:00:13.978194    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0416 18:00:13.978194    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0416 18:00:13.978194    6988 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0416 18:00:13.978338    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0416 18:00:13.978338    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0416 18:00:13.989548    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:00:13.989548    6988 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0416 18:00:13.997857    6988 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0416 18:00:14.012312    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0416 18:00:14.012312    6988 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0416 18:00:14.012312    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0416 18:00:14.012312    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0416 18:00:14.012312    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0416 18:00:14.012312    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0416 18:00:14.012312    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0416 18:00:14.024318    6988 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0416 18:00:14.111282    6988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0416 18:00:14.111282    6988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0416 18:00:14.111282    6988 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0416 18:00:15.159706    6988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0416 18:00:15.176637    6988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0416 18:00:15.206211    6988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 18:00:15.245325    6988 ssh_runner.go:195] Run: grep 172.19.91.227	control-plane.minikube.internal$ /etc/hosts
	I0416 18:00:15.251624    6988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.91.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 18:00:15.280749    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:15.453073    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 18:00:15.479748    6988 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:00:15.480950    6988 start.go:316] joinCluster: &{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpi
ration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 18:00:15.481069    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0416 18:00:15.481184    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:00:17.505631    6988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:00:17.505631    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:17.506531    6988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:00:19.802120    6988 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 18:00:19.802120    6988 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:00:19.802309    6988 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:00:19.993353    6988 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token gegaat.x425l3cmfd8uouwr --discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c 
	I0416 18:00:19.993446    6988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0": (4.5121206s)
	I0416 18:00:19.993446    6988 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0416 18:00:19.993532    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gegaat.x425l3cmfd8uouwr --discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-945500-m02"
	I0416 18:00:20.187968    6988 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 18:00:21.976702    6988 command_runner.go:130] > [preflight] Running pre-flight checks
	I0416 18:00:21.976807    6988 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0416 18:00:21.976807    6988 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0416 18:00:21.976877    6988 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 18:00:21.976877    6988 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 18:00:21.976877    6988 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0416 18:00:21.976877    6988 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0416 18:00:21.976877    6988 command_runner.go:130] > This node has joined the cluster:
	I0416 18:00:21.976877    6988 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0416 18:00:21.976946    6988 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0416 18:00:21.976946    6988 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0416 18:00:21.977006    6988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gegaat.x425l3cmfd8uouwr --discovery-token-ca-cert-hash sha256:1be9fd02076956fb35dbad8eac07ff70ff239674d7302837560d9f19e5c0b48c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-945500-m02": (1.9833608s)
	I0416 18:00:21.977121    6988 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0416 18:00:22.175327    6988 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0416 18:00:22.347211    6988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-945500-m02 minikube.k8s.io/updated_at=2024_04_16T18_00_22_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=multinode-945500 minikube.k8s.io/primary=false
	I0416 18:00:22.461008    6988 command_runner.go:130] > node/multinode-945500-m02 labeled
	I0416 18:00:22.461089    6988 start.go:318] duration metric: took 6.9798519s to joinCluster
	I0416 18:00:22.461089    6988 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0416 18:00:22.462104    6988 out.go:177] * Verifying Kubernetes components...
	I0416 18:00:22.462104    6988 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:00:22.473344    6988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:22.642951    6988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 18:00:22.666251    6988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 18:00:22.666816    6988 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.91.227:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 18:00:22.667170    6988 node_ready.go:35] waiting up to 6m0s for node "multinode-945500-m02" to be "Ready" ...
	I0416 18:00:22.667170    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:22.667170    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:22.667170    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:22.667170    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:22.680255    6988 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0416 18:00:22.680255    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Audit-Id: 79e76c8e-11df-4387-9f30-9f5f1755a5e0
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:22.680255    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:22.680255    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Content-Length: 3925
	I0416 18:00:22.680255    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:22 GMT
	I0416 18:00:22.680255    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"594","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fie
ldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2901 chars]
	I0416 18:00:23.181369    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:23.181855    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:23.181855    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:23.181855    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:23.186449    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:23.186582    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:23.186582    6988 round_trippers.go:580]     Audit-Id: 4bae6118-587b-4d9b-a922-3970c34bf8ba
	I0416 18:00:23.186582    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:23.186582    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:23.186673    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:23.186717    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:23.186756    6988 round_trippers.go:580]     Content-Length: 3925
	I0416 18:00:23.186756    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:23 GMT
	I0416 18:00:23.186949    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"594","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fie
ldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2901 chars]
	I0416 18:00:23.677191    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:23.677191    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:23.677317    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:23.677317    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:23.680492    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:23.680492    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Audit-Id: a7f57610-9860-47cd-ab38-3f286c67dceb
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:23.680492    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:23.680492    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Content-Length: 3925
	I0416 18:00:23.680492    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:23 GMT
	I0416 18:00:23.681055    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"594","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fie
ldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2901 chars]
	I0416 18:00:24.175480    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:24.175572    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:24.175572    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:24.175572    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:24.179352    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:24.179352    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:24.179352    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Content-Length: 3925
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:24 GMT
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Audit-Id: aacf48fe-adbc-4413-b29d-2b958ba7f686
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:24.179352    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:24.179352    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:24.179613    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"594","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fie
ldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2901 chars]
	I0416 18:00:24.673856    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:24.673925    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:24.673925    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:24.673925    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:24.676592    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:24.676592    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Audit-Id: 000742e0-7f5e-446d-8a61-8bd8bd82aedc
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:24.676592    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:24.676592    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:24.676592    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:24 GMT
	I0416 18:00:24.677350    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:24.677739    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:25.170259    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:25.170259    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:25.170259    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:25.170259    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:25.173426    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:25.173426    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:25 GMT
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Audit-Id: f9c1a393-b288-45a4-98d3-52d7af11f587
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:25.173426    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:25.173426    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:25.173426    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:25.173964    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:25.669435    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:25.669435    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:25.669435    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:25.669530    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:25.672183    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:25.672183    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:25.672183    6988 round_trippers.go:580]     Audit-Id: 56bf1cb1-d49e-4031-8ee9-9392bbe1f6c8
	I0416 18:00:25.672183    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:25.672183    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:25.672183    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:25.672183    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:25.672183    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:25.673192    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:25 GMT
	I0416 18:00:25.673265    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:26.181911    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:26.182121    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:26.182121    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:26.182121    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:26.186490    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:26.186490    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:26.186490    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:26.186490    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:26.186490    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:26.186490    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:26.186490    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:26 GMT
	I0416 18:00:26.186580    6988 round_trippers.go:580]     Audit-Id: 88264325-f44e-4d75-8f22-6b8c5c0e9719
	I0416 18:00:26.186580    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:26.186613    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:26.679044    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:26.679044    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:26.679044    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:26.679044    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:26.683356    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:26.683356    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:26.683356    6988 round_trippers.go:580]     Audit-Id: c54e17f7-7d89-4371-9a95-03073ffa0ffb
	I0416 18:00:26.683356    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:26.683356    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:26.683356    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:26.683356    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:26.683356    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:26.683527    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:26 GMT
	I0416 18:00:26.683689    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:26.683980    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:27.180698    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:27.180698    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:27.181090    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:27.181090    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:27.184901    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:27.184901    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:27.184901    6988 round_trippers.go:580]     Audit-Id: b36ab219-082e-454d-8277-5ffcef9ec16b
	I0416 18:00:27.184901    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:27.184901    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:27.184901    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:27.184901    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:27.185540    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:27.185540    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:27 GMT
	I0416 18:00:27.185671    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:27.678872    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:27.678872    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:27.678975    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:27.678975    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:27.682351    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:27.683004    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Audit-Id: f599c3f7-7c68-4f15-8953-bfd791eb0198
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:27.683054    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:27.683054    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:27.683054    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:27 GMT
	I0416 18:00:27.683286    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:28.183860    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:28.183860    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:28.183860    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:28.183860    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:28.186319    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:28.186319    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Audit-Id: 872de824-f646-4d43-860c-2165005c98a0
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:28.186319    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:28.186319    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:28.186319    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:28 GMT
	I0416 18:00:28.187336    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:28.670992    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:28.670992    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:28.670992    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:28.670992    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:28.675123    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:28.675123    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Audit-Id: 098493ef-9038-4b08-bf9e-667a6c61491f
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:28.675123    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:28.675123    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:28.675123    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:28 GMT
	I0416 18:00:28.675123    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:29.174836    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:29.174890    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:29.174945    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:29.174945    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:29.179018    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:29.179018    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:29 GMT
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Audit-Id: c31ffe7d-9164-4329-85bd-7a52ce9c45ff
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:29.179018    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:29.179018    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:29.179018    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:29.179018    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:29.179706    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:29.677336    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:29.677336    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:29.677336    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:29.677336    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:29.681001    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:29.681227    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:29.681286    6988 round_trippers.go:580]     Audit-Id: 389d232b-c9c8-4769-869a-1c7205097848
	I0416 18:00:29.681330    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:29.681330    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:29.681367    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:29.681367    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:29.681367    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:29.681367    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:29 GMT
	I0416 18:00:29.681367    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:30.179989    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:30.179989    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:30.179989    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:30.179989    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:30.184557    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:30.184557    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Audit-Id: 2d0a23fe-1858-420a-8f7d-89a4ab9e2074
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:30.184860    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:30.184860    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:30.184860    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:30 GMT
	I0416 18:00:30.185147    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:30.678172    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:30.678172    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:30.678172    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:30.678172    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:30.681395    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:30.681395    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:30.681395    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:30.681395    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:30.681395    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:30.681395    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:30.681395    6988 round_trippers.go:580]     Content-Length: 4034
	I0416 18:00:30.681395    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:30 GMT
	I0416 18:00:30.682030    6988 round_trippers.go:580]     Audit-Id: d89d2b5b-078b-40e7-a8de-db37ba442614
	I0416 18:00:30.682245    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"599","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3010 chars]
	I0416 18:00:31.177211    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:31.177533    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:31.177533    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:31.177533    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:31.252985    6988 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0416 18:00:31.252985    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:31.252985    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:31.252985    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:31.252985    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:31.252985    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:31.252985    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:31 GMT
	I0416 18:00:31.252985    6988 round_trippers.go:580]     Audit-Id: 874c3508-0079-436c-9ee6-4bfd92a9fb2a
	I0416 18:00:31.253576    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:31.253576    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:31.682017    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:31.682017    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:31.682017    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:31.682017    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:31.684916    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:31.685729    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:31.685729    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:31.685729    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:31 GMT
	I0416 18:00:31.685729    6988 round_trippers.go:580]     Audit-Id: d159045d-d37c-4252-bd61-8c73f50b03f8
	I0416 18:00:31.685830    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:31.685830    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:31.685830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:31.685830    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:32.173658    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:32.173658    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:32.173658    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:32.173658    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:32.177586    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:32.177586    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:32.177586    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:32.177586    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:32 GMT
	I0416 18:00:32.177586    6988 round_trippers.go:580]     Audit-Id: d53ca0a9-698a-4e2e-92c6-bda133162c76
	I0416 18:00:32.177586    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:32.177586    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:32.177586    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:32.178475    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:32.678024    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:32.678024    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:32.678024    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:32.678024    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:32.682085    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:32.682614    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:32.682614    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:32.682614    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:32 GMT
	I0416 18:00:32.682614    6988 round_trippers.go:580]     Audit-Id: 165d0d28-6574-4108-94db-5907ad039dd6
	I0416 18:00:32.682614    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:32.682614    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:32.682684    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:32.682989    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:33.168664    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:33.168922    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:33.168922    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:33.168922    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:33.172390    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:33.172390    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:33.172390    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:33.172390    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:33.172390    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:33.172390    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:33.172390    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:33 GMT
	I0416 18:00:33.172390    6988 round_trippers.go:580]     Audit-Id: ba696923-3f1a-4e11-8165-651eef11660a
	I0416 18:00:33.173411    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:33.676259    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:33.676259    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:33.676259    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:33.676259    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:33.680629    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:33.680629    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:33.680629    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:33.681219    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:33.681219    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:33.681219    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:33 GMT
	I0416 18:00:33.681219    6988 round_trippers.go:580]     Audit-Id: 7be99938-6273-447f-8367-634cd5f0a4de
	I0416 18:00:33.681219    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:33.681531    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:33.682462    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:34.178701    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:34.178701    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:34.178701    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:34.178701    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:34.181286    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:34.181286    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:34.181286    6988 round_trippers.go:580]     Audit-Id: f6019dfe-ab29-48d8-9d01-ee729ec66029
	I0416 18:00:34.181286    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:34.181286    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:34.181286    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:34.181286    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:34.181286    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:34 GMT
	I0416 18:00:34.181975    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:34.669380    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:34.669668    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:34.669668    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:34.669668    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:34.672465    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:34.672465    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:34.672465    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:34.672465    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:34.672465    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:34.672465    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:34 GMT
	I0416 18:00:34.672465    6988 round_trippers.go:580]     Audit-Id: a8719766-b414-4604-94c0-e20be6a01464
	I0416 18:00:34.672465    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:34.673674    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:35.169393    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:35.169618    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:35.169692    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:35.169692    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:35.174028    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:35.174028    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:35.174209    6988 round_trippers.go:580]     Audit-Id: ea553a57-8167-487c-a417-8cf0ded53743
	I0416 18:00:35.174209    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:35.174209    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:35.174209    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:35.174209    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:35.174209    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:35 GMT
	I0416 18:00:35.174511    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:35.682247    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:35.682650    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:35.682650    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:35.682650    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:35.685938    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:35.685938    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:35.685938    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:35.685938    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:35.685938    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:35.685938    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:35.685938    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:35 GMT
	I0416 18:00:35.685938    6988 round_trippers.go:580]     Audit-Id: 82dc03b1-e6f8-433d-ac2b-277fc69a2b99
	I0416 18:00:35.686923    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:35.687544    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:36.182291    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:36.182393    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:36.182393    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:36.182442    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:36.190024    6988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 18:00:36.190024    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:36.190024    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:36.190024    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:36.190024    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:36.190024    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:36.190024    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:36 GMT
	I0416 18:00:36.190024    6988 round_trippers.go:580]     Audit-Id: a48a8529-ba4d-49a4-90a4-d4a77c7c5001
	I0416 18:00:36.190657    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:36.677065    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:36.677162    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:36.677162    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:36.677162    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:36.680646    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:36.680646    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:36.680646    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:36.680646    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:36.680646    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:36 GMT
	I0416 18:00:36.681185    6988 round_trippers.go:580]     Audit-Id: e4e94e54-d688-4263-a0ef-d154f5f4abeb
	I0416 18:00:36.681185    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:36.681185    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:36.681442    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:37.174195    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:37.174195    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:37.174634    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:37.174634    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:37.178029    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:37.178029    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:37.178830    6988 round_trippers.go:580]     Audit-Id: 55aa8476-6f9d-4256-9569-30e89b1a496b
	I0416 18:00:37.178830    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:37.178830    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:37.178830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:37.178830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:37.178830    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:37 GMT
	I0416 18:00:37.179087    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:37.673081    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:37.673348    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:37.673425    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:37.673425    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:37.677095    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:37.677095    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:37.677095    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:37.677095    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:37.677193    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:37.677193    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:37 GMT
	I0416 18:00:37.677193    6988 round_trippers.go:580]     Audit-Id: f84a1c1a-51f5-4ca5-aedb-2f21bb70141f
	I0416 18:00:37.677193    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:37.677583    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:38.171025    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:38.171133    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:38.171133    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:38.171133    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:38.174956    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:38.174956    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:38.174956    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:38.174956    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:38.175478    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:38 GMT
	I0416 18:00:38.175478    6988 round_trippers.go:580]     Audit-Id: ad79e752-a790-4167-88de-0fa0a1ce2c7f
	I0416 18:00:38.175478    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:38.175478    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:38.175685    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:38.176345    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:38.682781    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:38.682781    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:38.682781    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:38.682875    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:38.687443    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:38.687443    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:38.687443    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:38 GMT
	I0416 18:00:38.687443    6988 round_trippers.go:580]     Audit-Id: 9f833ee4-3fc1-4823-99f9-056bf39a2137
	I0416 18:00:38.687443    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:38.687443    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:38.687443    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:38.687443    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:38.687880    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:39.181718    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:39.181718    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:39.181718    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:39.181718    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:39.185234    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:39.185234    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:39.185234    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:39 GMT
	I0416 18:00:39.185234    6988 round_trippers.go:580]     Audit-Id: c944df6e-2f72-4b2f-84ed-0ef01d4bf4ad
	I0416 18:00:39.185234    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:39.185234    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:39.185234    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:39.185234    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:39.186227    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:39.679471    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:39.679471    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:39.679471    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:39.679471    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:39.683435    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:39.683435    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:39.683435    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:39.683435    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:39.683435    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:39.683435    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:39 GMT
	I0416 18:00:39.683435    6988 round_trippers.go:580]     Audit-Id: 72ce3907-afe5-4673-a364-1b0ade9a63a2
	I0416 18:00:39.683435    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:39.684439    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:40.179709    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:40.179709    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:40.179709    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:40.179709    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:40.182280    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:40.182280    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:40.182280    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:40.182280    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:40.182280    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:40.182280    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:40 GMT
	I0416 18:00:40.182280    6988 round_trippers.go:580]     Audit-Id: 15242798-963e-4292-8f78-c57c95f730a6
	I0416 18:00:40.182280    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:40.183037    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:40.183378    6988 node_ready.go:53] node "multinode-945500-m02" has status "Ready":"False"
	I0416 18:00:40.679352    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:40.679436    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:40.679436    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:40.679436    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:40.682752    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:40.682752    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:40.682752    6988 round_trippers.go:580]     Audit-Id: e11e0806-566d-477a-bcb8-8829648fc79a
	I0416 18:00:40.682752    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:40.682752    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:40.682752    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:40.682752    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:40.682752    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:40 GMT
	I0416 18:00:40.683363    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"608","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3402 chars]
	I0416 18:00:41.181519    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:41.181623    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.181623    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.181623    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.184563    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.184563    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.184563    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.184563    6988 round_trippers.go:580]     Audit-Id: 8c5f2f81-67e0-45b9-81aa-b9f9cb72a322
	I0416 18:00:41.184563    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.185366    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.185366    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.185366    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.185630    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"630","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3268 chars]
	I0416 18:00:41.186155    6988 node_ready.go:49] node "multinode-945500-m02" has status "Ready":"True"
	I0416 18:00:41.186155    6988 node_ready.go:38] duration metric: took 18.5179332s for node "multinode-945500-m02" to be "Ready" ...
	I0416 18:00:41.186235    6988 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:00:41.186380    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods
	I0416 18:00:41.186380    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.186380    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.186461    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.190907    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:41.191511    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.191511    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.191511    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.191511    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.191511    6988 round_trippers.go:580]     Audit-Id: 5b40846d-502b-40b4-b4e6-b0c0c199dcda
	I0416 18:00:41.191511    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.191511    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.194735    6988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"630"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70406 chars]
	I0416 18:00:41.197721    6988 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.197721    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:00:41.197721    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.197721    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.197721    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.200304    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.201307    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.201307    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.201307    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.201307    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.201307    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.201307    6988 round_trippers.go:580]     Audit-Id: ddd585b2-d4a5-4fc9-9e78-3d162e0d75cf
	I0416 18:00:41.201307    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.201671    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"441","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0416 18:00:41.202254    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.202254    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.202254    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.202254    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.204830    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.204830    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.204830    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.204830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.204830    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.204830    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.204830    6988 round_trippers.go:580]     Audit-Id: 5615a17f-6d55-4784-b914-b1262342e4ef
	I0416 18:00:41.204830    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.205530    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.206190    6988 pod_ready.go:92] pod "coredns-76f75df574-86z7h" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.206190    6988 pod_ready.go:81] duration metric: took 8.4686ms for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.206190    6988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.206190    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:00:41.206190    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.206190    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.206190    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.208799    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.208799    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.208799    6988 round_trippers.go:580]     Audit-Id: ae8a0c71-2dd6-45b7-96d9-80a7e15fec82
	I0416 18:00:41.208799    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.208799    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.208799    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.208799    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.208799    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.209788    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"245cef70-3506-471b-9bf6-dd14a2c23d8c","resourceVersion":"372","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.91.227:2379","kubernetes.io/config.hash":"c735a32dacf9631b2b4787fe99cff316","kubernetes.io/config.mirror":"c735a32dacf9631b2b4787fe99cff316","kubernetes.io/config.seen":"2024-04-16T17:57:28.101466445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0416 18:00:41.209825    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.209825    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.209825    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.209825    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.211989    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.211989    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.211989    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.211989    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.211989    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.211989    6988 round_trippers.go:580]     Audit-Id: 0c5d029c-085b-4f7e-a116-d1258a75da93
	I0416 18:00:41.211989    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.211989    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.213223    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.213811    6988 pod_ready.go:92] pod "etcd-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.213811    6988 pod_ready.go:81] duration metric: took 7.62ms for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.213811    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.213811    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-945500
	I0416 18:00:41.213811    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.213811    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.213811    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.216448    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.216448    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.216448    6988 round_trippers.go:580]     Audit-Id: 6b2d211f-a673-4f75-931c-2de9b00a2806
	I0416 18:00:41.216448    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.216448    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.216448    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.216448    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.216448    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.217191    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-945500","namespace":"kube-system","uid":"c6ae969a-de5d-4c7e-af09-b1a5eb21f2ab","resourceVersion":"314","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.91.227:8443","kubernetes.io/config.hash":"564fae5a3e5851c815d6092b123a5395","kubernetes.io/config.mirror":"564fae5a3e5851c815d6092b123a5395","kubernetes.io/config.seen":"2024-04-16T17:57:28.101471746Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0416 18:00:41.217191    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.217778    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.217778    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.217778    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.219971    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.219971    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.219971    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.219971    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.219971    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.219971    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.219971    6988 round_trippers.go:580]     Audit-Id: 97c48e0c-3227-4fdb-bb53-2c5b0a99e16e
	I0416 18:00:41.219971    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.220674    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.220674    6988 pod_ready.go:92] pod "kube-apiserver-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.220674    6988 pod_ready.go:81] duration metric: took 6.8627ms for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.220674    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.220674    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-945500
	I0416 18:00:41.221243    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.221243    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.221243    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.223295    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.223295    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.224145    6988 round_trippers.go:580]     Audit-Id: 5ff785c8-f305-4111-b54a-6d01717ce756
	I0416 18:00:41.224182    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.224223    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.224223    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.224223    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.224315    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.224478    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-945500","namespace":"kube-system","uid":"01b937c2-9827-4240-83f0-3536fec5eb5e","resourceVersion":"345","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.mirror":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.seen":"2024-04-16T17:57:28.101473146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0416 18:00:41.225131    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.225131    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.225131    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.225131    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.231431    6988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 18:00:41.231431    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.231431    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.231431    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.231431    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.231431    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.231431    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.231431    6988 round_trippers.go:580]     Audit-Id: d45b4d6a-ea94-4484-87ef-fd18b35ed725
	I0416 18:00:41.231431    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.232071    6988 pod_ready.go:92] pod "kube-controller-manager-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.232071    6988 pod_ready.go:81] duration metric: took 11.3966ms for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.232071    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q5bdr" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.382236    6988 request.go:629] Waited for 150.1565ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q5bdr
	I0416 18:00:41.382407    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q5bdr
	I0416 18:00:41.382407    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.382407    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.382407    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.385083    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:41.385083    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.385083    6988 round_trippers.go:580]     Audit-Id: b4d8ec79-02a6-45ad-9ecc-b7b22761dffb
	I0416 18:00:41.385083    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.385083    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.385083    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.385083    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.385083    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.385507    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q5bdr","generateName":"kube-proxy-","namespace":"kube-system","uid":"18f90e3f-dd52-44a3-918a-66181a779f58","resourceVersion":"614","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5826 chars]
	I0416 18:00:41.585818    6988 request.go:629] Waited for 199.7761ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:41.585818    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:00:41.585818    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.586164    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.586164    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.590196    6988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:00:41.590196    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.590196    6988 round_trippers.go:580]     Audit-Id: 1d479fce-49d7-483b-a6cd-e9bad5ef24c8
	I0416 18:00:41.590196    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.590196    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.590196    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.590196    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.590196    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.590196    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"630","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3268 chars]
	I0416 18:00:41.590835    6988 pod_ready.go:92] pod "kube-proxy-q5bdr" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.590835    6988 pod_ready.go:81] duration metric: took 358.7431ms for pod "kube-proxy-q5bdr" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.590835    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.787070    6988 request.go:629] Waited for 196.0845ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 18:00:41.787761    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 18:00:41.787761    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.787761    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.787761    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.791225    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:41.791225    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.791225    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.791225    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.791225    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.791225    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:41 GMT
	I0416 18:00:41.791225    6988 round_trippers.go:580]     Audit-Id: 0948013e-ea2e-4863-bd44-98088c0ba200
	I0416 18:00:41.791225    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.792789    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rfxsg","generateName":"kube-proxy-","namespace":"kube-system","uid":"b740e6e0-4768-4dd4-a958-307662a92578","resourceVersion":"401","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0416 18:00:41.990002    6988 request.go:629] Waited for 196.614ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.990240    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:41.990240    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:41.990240    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:41.990240    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:41.993828    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:41.993828    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:41.993828    6988 round_trippers.go:580]     Audit-Id: 604aaeac-f05a-47b3-96f5-af81155d3173
	I0416 18:00:41.993828    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:41.993828    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:41.993828    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:41.993828    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:41.993828    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:42 GMT
	I0416 18:00:41.994260    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:41.994754    6988 pod_ready.go:92] pod "kube-proxy-rfxsg" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:41.994817    6988 pod_ready.go:81] duration metric: took 403.9592ms for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:41.994817    6988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:42.191736    6988 request.go:629] Waited for 196.6039ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 18:00:42.191828    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 18:00:42.191933    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:42.191933    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:42.191933    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:42.194567    6988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:00:42.194567    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:42.194567    6988 round_trippers.go:580]     Audit-Id: 6ab76f79-405f-48f9-ad04-90e78aa34737
	I0416 18:00:42.194567    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:42.194567    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:42.194567    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:42.194567    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:42.195203    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:42 GMT
	I0416 18:00:42.195382    6988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-945500","namespace":"kube-system","uid":"a09e52e8-1ac2-4c22-8a3d-57969fae85a9","resourceVersion":"310","creationTimestamp":"2024-04-16T17:57:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.mirror":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.seen":"2024-04-16T17:57:20.694761708Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0416 18:00:42.393042    6988 request.go:629] Waited for 196.8309ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:42.393350    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes/multinode-945500
	I0416 18:00:42.393350    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:42.393434    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:42.393434    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:42.396719    6988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:00:42.397078    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:42.397078    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:42.397078    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:42.397078    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:42 GMT
	I0416 18:00:42.397078    6988 round_trippers.go:580]     Audit-Id: ff7a49f1-7963-4872-babf-4857b06f6961
	I0416 18:00:42.397078    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:42.397078    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:42.397705    6988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Fields [truncated 4966 chars]
	I0416 18:00:42.397705    6988 pod_ready.go:92] pod "kube-scheduler-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:00:42.397705    6988 pod_ready.go:81] duration metric: took 402.8649ms for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:00:42.397705    6988 pod_ready.go:38] duration metric: took 1.2114007s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:00:42.398226    6988 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 18:00:42.407057    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:00:42.430019    6988 system_svc.go:56] duration metric: took 31.7913ms WaitForService to wait for kubelet
	I0416 18:00:42.430019    6988 kubeadm.go:576] duration metric: took 19.9677952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 18:00:42.430019    6988 node_conditions.go:102] verifying NodePressure condition ...
	I0416 18:00:42.594801    6988 request.go:629] Waited for 164.4742ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.91.227:8443/api/v1/nodes
	I0416 18:00:42.595048    6988 round_trippers.go:463] GET https://172.19.91.227:8443/api/v1/nodes
	I0416 18:00:42.595048    6988 round_trippers.go:469] Request Headers:
	I0416 18:00:42.595156    6988 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:00:42.595156    6988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:00:42.600192    6988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 18:00:42.600192    6988 round_trippers.go:577] Response Headers:
	I0416 18:00:42.600192    6988 round_trippers.go:580]     Audit-Id: 7201947e-da4a-45b2-acc1-266f83b267ad
	I0416 18:00:42.600296    6988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:00:42.600296    6988 round_trippers.go:580]     Content-Type: application/json
	I0416 18:00:42.600296    6988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:00:42.600296    6988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:00:42.600296    6988 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:00:42 GMT
	I0416 18:00:42.600799    6988 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"633"},"items":[{"metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"452","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 9279 chars]
	I0416 18:00:42.601645    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:00:42.601726    6988 node_conditions.go:123] node cpu capacity is 2
	I0416 18:00:42.601726    6988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:00:42.601726    6988 node_conditions.go:123] node cpu capacity is 2
	I0416 18:00:42.601726    6988 node_conditions.go:105] duration metric: took 171.6974ms to run NodePressure ...
	I0416 18:00:42.601799    6988 start.go:240] waiting for startup goroutines ...
	I0416 18:00:42.601887    6988 start.go:254] writing updated cluster config ...
	I0416 18:00:42.611423    6988 ssh_runner.go:195] Run: rm -f paused
	I0416 18:00:42.727143    6988 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 18:00:42.728491    6988 out.go:177] * Done! kubectl is now configured to use "multinode-945500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 16 18:01:06 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:06.790007462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:01:06 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:06.790158272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:01:06 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:06.790278279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:01:06 multinode-945500 dockerd[1329]: time="2024-04-16T18:01:06.790482592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:01:52 multinode-945500 dockerd[1323]: 2024/04/16 18:01:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:05:43 multinode-945500 dockerd[1323]: 2024/04/16 18:05:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:05:43 multinode-945500 dockerd[1323]: 2024/04/16 18:05:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:05:43 multinode-945500 dockerd[1323]: 2024/04/16 18:05:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:05:44 multinode-945500 dockerd[1323]: 2024/04/16 18:05:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:05:44 multinode-945500 dockerd[1323]: 2024/04/16 18:05:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:05:44 multinode-945500 dockerd[1323]: 2024/04/16 18:05:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:05:44 multinode-945500 dockerd[1323]: 2024/04/16 18:05:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:06:55 multinode-945500 dockerd[1323]: 2024/04/16 18:06:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:06:55 multinode-945500 dockerd[1323]: 2024/04/16 18:06:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:06:55 multinode-945500 dockerd[1323]: 2024/04/16 18:06:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:06:55 multinode-945500 dockerd[1323]: 2024/04/16 18:06:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:06:55 multinode-945500 dockerd[1323]: 2024/04/16 18:06:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:06:55 multinode-945500 dockerd[1323]: 2024/04/16 18:06:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 18:06:55 multinode-945500 dockerd[1323]: 2024/04/16 18:06:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1475366123af9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   c72a50cfb5bde       busybox-7fdf7869d9-jxvx2
	6ad0b1d75a1e3       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   0                   2ba60ece6840a       coredns-76f75df574-86z7h
	2b470472d009f       6e38f40d628db                                                                                         14 minutes ago      Running             storage-provisioner       0                   6f233a9704eee       storage-provisioner
	cd37920f1d544       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              14 minutes ago      Running             kindnet-cni               0                   d2cd68d7f406d       kindnet-tp7jl
	f56880607ce1e       a1d263b5dc5b0                                                                                         14 minutes ago      Running             kube-proxy                0                   68766d2b671ff       kube-proxy-rfxsg
	736259e5d03b5       39f995c9f1996                                                                                         15 minutes ago      Running             kube-apiserver            0                   b8699d93388d0       kube-apiserver-multinode-945500
	4a7c8d9808b66       8c390d98f50c0                                                                                         15 minutes ago      Running             kube-scheduler            0                   ecb0ceb1a3fed       kube-scheduler-multinode-945500
	91288754cb0bd       6052a25da3f97                                                                                         15 minutes ago      Running             kube-controller-manager   0                   d28c611e06055       kube-controller-manager-multinode-945500
	0cae708a3787a       3861cfcd7c04c                                                                                         15 minutes ago      Running             etcd                      0                   5f7e5b16341d1       etcd-multinode-945500
	
	
	==> coredns [6ad0b1d75a1e] <==
	[INFO] 10.244.0.3:47642 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140809s
	[INFO] 10.244.1.2:38063 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000393824s
	[INFO] 10.244.1.2:53430 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000153309s
	[INFO] 10.244.1.2:47690 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181411s
	[INFO] 10.244.1.2:40309 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145609s
	[INFO] 10.244.1.2:60258 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000052603s
	[INFO] 10.244.1.2:43597 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000068204s
	[INFO] 10.244.1.2:53767 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061503s
	[INFO] 10.244.1.2:54777 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056603s
	[INFO] 10.244.0.3:38964 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184311s
	[INFO] 10.244.0.3:53114 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074805s
	[INFO] 10.244.0.3:36074 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062204s
	[INFO] 10.244.0.3:60668 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090906s
	[INFO] 10.244.1.2:54659 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099206s
	[INFO] 10.244.1.2:41929 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080505s
	[INFO] 10.244.1.2:40931 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059704s
	[INFO] 10.244.1.2:48577 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058804s
	[INFO] 10.244.0.3:33415 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000283317s
	[INFO] 10.244.0.3:52256 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109407s
	[INFO] 10.244.0.3:34542 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000222014s
	[INFO] 10.244.0.3:59509 - 5 "PTR IN 1.80.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000278017s
	[INFO] 10.244.1.2:34647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164509s
	[INFO] 10.244.1.2:44123 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155309s
	[INFO] 10.244.1.2:47985 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000056403s
	[INFO] 10.244.1.2:38781 - 5 "PTR IN 1.80.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000051303s
	
	
	==> describe nodes <==
	Name:               multinode-945500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-945500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=multinode-945500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T17_57_28_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:57:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-945500
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 18:12:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 18:11:45 +0000   Tue, 16 Apr 2024 17:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 18:11:45 +0000   Tue, 16 Apr 2024 17:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 18:11:45 +0000   Tue, 16 Apr 2024 17:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 18:11:45 +0000   Tue, 16 Apr 2024 17:57:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.91.227
	  Hostname:    multinode-945500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 e85d34dd6c5848b4a3ec498b43e70cda
	  System UUID:                f07a2411-3a9a-ca4a-afc3-5ddc78eea33d
	  Boot ID:                    271a6251-2183-4573-9d3f-923b343cbbd7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-jxvx2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-76f75df574-86z7h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-multinode-945500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-tp7jl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-multinode-945500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-multinode-945500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-rfxsg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-multinode-945500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node multinode-945500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node multinode-945500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node multinode-945500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node multinode-945500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node multinode-945500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node multinode-945500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node multinode-945500 event: Registered Node multinode-945500 in Controller
	  Normal  NodeReady                14m                kubelet          Node multinode-945500 status is now: NodeReady
	
	
	Name:               multinode-945500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-945500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=multinode-945500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T18_00_22_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 18:00:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-945500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 18:12:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 18:11:34 +0000   Tue, 16 Apr 2024 18:00:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 18:11:34 +0000   Tue, 16 Apr 2024 18:00:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 18:11:34 +0000   Tue, 16 Apr 2024 18:00:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 18:11:34 +0000   Tue, 16 Apr 2024 18:00:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.91.6
	  Hostname:    multinode-945500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ffb3ffe1886460d8f31c8166436085f
	  System UUID:                cd85b681-7c9d-6842-b820-50fe53a2fe10
	  Boot ID:                    391147f8-cd3e-46f1-9b23-dd3a04f0f3a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ns8nx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kindnet-7pg6g               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-q5bdr            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x2 over 12m)  kubelet          Node multinode-945500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x2 over 12m)  kubelet          Node multinode-945500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)  kubelet          Node multinode-945500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node multinode-945500-m02 event: Registered Node multinode-945500-m02 in Controller
	  Normal  NodeReady                11m                kubelet          Node multinode-945500-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr16 17:56] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.180108] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[ +28.712788] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.080808] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.453937] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.161653] systemd-fstab-generator[996]: Ignoring "noauto" option for root device
	[  +0.200737] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[  +2.669121] systemd-fstab-generator[1182]: Ignoring "noauto" option for root device
	[  +0.171244] systemd-fstab-generator[1194]: Ignoring "noauto" option for root device
	[  +0.164230] systemd-fstab-generator[1206]: Ignoring "noauto" option for root device
	[  +0.237653] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[Apr16 17:57] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.100359] kauditd_printk_skb: 205 callbacks suppressed
	[  +2.927133] systemd-fstab-generator[1513]: Ignoring "noauto" option for root device
	[  +5.699753] systemd-fstab-generator[1707]: Ignoring "noauto" option for root device
	[  +0.085837] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.760431] systemd-fstab-generator[2107]: Ignoring "noauto" option for root device
	[  +0.135160] kauditd_printk_skb: 62 callbacks suppressed
	[  +7.450297] hrtimer: interrupt took 987259 ns
	[  +5.262610] systemd-fstab-generator[2292]: Ignoring "noauto" option for root device
	[  +0.195654] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.560394] kauditd_printk_skb: 51 callbacks suppressed
	[Apr16 18:01] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [0cae708a3787] <==
	{"level":"info","ts":"2024-04-16T17:57:22.037796Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.91.227:2380"}
	{"level":"info","ts":"2024-04-16T17:57:22.485441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-16T17:57:22.485773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-16T17:57:22.486062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 received MsgPreVoteResp from e902f456ac8a37b6 at term 1"}
	{"level":"info","ts":"2024-04-16T17:57:22.486206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 became candidate at term 2"}
	{"level":"info","ts":"2024-04-16T17:57:22.486317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 received MsgVoteResp from e902f456ac8a37b6 at term 2"}
	{"level":"info","ts":"2024-04-16T17:57:22.486459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 became leader at term 2"}
	{"level":"info","ts":"2024-04-16T17:57:22.486613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e902f456ac8a37b6 elected leader e902f456ac8a37b6 at term 2"}
	{"level":"info","ts":"2024-04-16T17:57:22.492605Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e902f456ac8a37b6","local-member-attributes":"{Name:multinode-945500 ClientURLs:[https://172.19.91.227:2379]}","request-path":"/0/members/e902f456ac8a37b6/attributes","cluster-id":"ba3fb579e58fbd76","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T17:57:22.493027Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:57:22.493291Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T17:57:22.495438Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T17:57:22.493174Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:57:22.501637Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T17:57:22.494099Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:57:22.508993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.91.227:2379"}
	{"level":"info","ts":"2024-04-16T17:57:22.537458Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ba3fb579e58fbd76","local-member-id":"e902f456ac8a37b6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:57:22.537767Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:57:22.540447Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T18:07:22.633427Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":757}
	{"level":"info","ts":"2024-04-16T18:07:22.641746Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":757,"took":"7.646828ms","hash":2229679416,"current-db-size-bytes":2338816,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2338816,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-04-16T18:07:22.641863Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2229679416,"revision":757,"compact-revision":-1}
	{"level":"info","ts":"2024-04-16T18:12:22.644262Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1028}
	{"level":"info","ts":"2024-04-16T18:12:22.648699Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1028,"took":"3.694147ms","hash":804683808,"current-db-size-bytes":2338816,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1609728,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-16T18:12:22.648809Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":804683808,"revision":1028,"compact-revision":757}
	
	
	==> kernel <==
	 18:12:28 up 17 min,  0 users,  load average: 0.13, 0.14, 0.14
	Linux multinode-945500 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [cd37920f1d54] <==
	I0416 18:11:18.997531       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:11:29.010282       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:11:29.010321       1 main.go:227] handling current node
	I0416 18:11:29.010332       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:11:29.010338       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:11:39.015988       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:11:39.016091       1 main.go:227] handling current node
	I0416 18:11:39.016106       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:11:39.016113       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:11:49.021670       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:11:49.021711       1 main.go:227] handling current node
	I0416 18:11:49.021722       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:11:49.021728       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:11:59.033228       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:11:59.033267       1 main.go:227] handling current node
	I0416 18:11:59.033278       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:11:59.033285       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:12:09.039007       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:12:09.039107       1 main.go:227] handling current node
	I0416 18:12:09.039119       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:12:09.039126       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:12:19.051437       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:12:19.051475       1 main.go:227] handling current node
	I0416 18:12:19.051486       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:12:19.051493       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [736259e5d03b] <==
	I0416 17:57:24.492548       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 17:57:24.493015       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 17:57:24.493164       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 17:57:24.493567       1 aggregator.go:165] initial CRD sync complete...
	I0416 17:57:24.493754       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 17:57:24.493855       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 17:57:24.493948       1 cache.go:39] Caches are synced for autoregister controller
	I0416 17:57:24.498835       1 controller.go:624] quota admission added evaluator for: namespaces
	I0416 17:57:24.572544       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0416 17:57:24.581941       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 17:57:25.383934       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0416 17:57:25.391363       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0416 17:57:25.391584       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 17:57:26.186472       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 17:57:26.241100       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 17:57:26.380286       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0416 17:57:26.389156       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.91.227]
	I0416 17:57:26.390446       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 17:57:26.395894       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 17:57:26.463024       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 17:57:27.978875       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 17:57:27.996061       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0416 17:57:28.010130       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 17:57:40.322187       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0416 17:57:40.406944       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [91288754cb0b] <==
	I0416 17:57:41.176487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="38.505µs"
	I0416 17:57:50.419156       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="87.708µs"
	I0416 17:57:50.439046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="77.007µs"
	I0416 17:57:52.289724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="340.797µs"
	I0416 17:57:52.327958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="8.879815ms"
	I0416 17:57:52.329283       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="77.899µs"
	I0416 17:57:54.522679       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0416 18:00:21.143291       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-945500-m02\" does not exist"
	I0416 18:00:21.160886       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7pg6g"
	I0416 18:00:21.165863       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q5bdr"
	I0416 18:00:21.190337       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-945500-m02" podCIDRs=["10.244.1.0/24"]
	I0416 18:00:24.552622       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-945500-m02"
	I0416 18:00:24.552697       1 event.go:376] "Event occurred" object="multinode-945500-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-945500-m02 event: Registered Node multinode-945500-m02 in Controller"
	I0416 18:00:41.273225       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-945500-m02"
	I0416 18:01:05.000162       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 2"
	I0416 18:01:05.018037       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-ns8nx"
	I0416 18:01:05.041877       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-jxvx2"
	I0416 18:01:05.061957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="58.524499ms"
	I0416 18:01:05.079880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="17.398354ms"
	I0416 18:01:05.080339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="67.502µs"
	I0416 18:01:05.093042       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="55.802µs"
	I0416 18:01:07.013162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="7.557663ms"
	I0416 18:01:07.014558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="1.14747ms"
	I0416 18:01:07.433900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="7.930386ms"
	I0416 18:01:07.434257       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="37.403µs"
	
	
	==> kube-proxy [f56880607ce1] <==
	I0416 17:57:41.776688       1 server_others.go:72] "Using iptables proxy"
	I0416 17:57:41.792626       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.91.227"]
	I0416 17:57:41.867257       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 17:57:41.867331       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:57:41.867350       1 server_others.go:168] "Using iptables Proxier"
	I0416 17:57:41.871330       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:57:41.872230       1 server.go:865] "Version info" version="v1.29.3"
	I0416 17:57:41.872370       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:57:41.874113       1 config.go:188] "Starting service config controller"
	I0416 17:57:41.874135       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 17:57:41.874160       1 config.go:97] "Starting endpoint slice config controller"
	I0416 17:57:41.874165       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 17:57:41.876871       1 config.go:315] "Starting node config controller"
	I0416 17:57:41.876896       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 17:57:41.974693       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 17:57:41.974749       1 shared_informer.go:318] Caches are synced for service config
	I0416 17:57:41.977426       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4a7c8d9808b6] <==
	W0416 17:57:25.449324       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.449598       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.655533       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.656479       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.692827       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 17:57:25.693097       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 17:57:25.711042       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 17:57:25.711136       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 17:57:25.720155       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 17:57:25.720353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 17:57:25.721550       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.721738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.738855       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 17:57:25.738995       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 17:57:25.765058       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 17:57:25.765096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0416 17:57:25.774340       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.774569       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.791990       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 17:57:25.792031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 17:57:25.929298       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 17:57:25.929342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 17:57:26.119349       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 17:57:26.119818       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0416 17:57:29.235915       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 18:08:28 multinode-945500 kubelet[2114]: E0416 18:08:28.261000    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:08:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:08:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:08:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:08:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:09:28 multinode-945500 kubelet[2114]: E0416 18:09:28.261953    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:09:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:09:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:09:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:09:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:10:28 multinode-945500 kubelet[2114]: E0416 18:10:28.262128    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:10:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:10:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:10:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:10:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:11:28 multinode-945500 kubelet[2114]: E0416 18:11:28.260184    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:11:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:11:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:11:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:11:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:12:28 multinode-945500 kubelet[2114]: E0416 18:12:28.266482    2114 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:12:28 multinode-945500 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:12:28 multinode-945500 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:12:28 multinode-945500 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:12:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0416 18:12:21.370945   12892 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-945500 -n multinode-945500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-945500 -n multinode-945500: (10.7539114s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-945500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (259.57s)
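The kubelet errors repeated in the post-mortem log above all come from the iptables canary: the kubelet periodically tries to create a `KUBE-KUBELET-CANARY` chain in the IPv6 `nat` table, and the guest kernel here cannot initialize that table (the log itself suggests `insmod`, i.e. the `ip6table_nat` module is missing). For triage it can help to pull the failing table and chain out of such a line mechanically; a minimal sketch with `sed`, using a sample line copied from the log above:

```shell
# One of the kubelet canary-failure trailer lines from the post-mortem log.
line='Apr 16 18:12:28 multinode-945500 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"'

# Extract the quoted table="..." and chain="..." values via sed capture groups.
table=$(printf '%s\n' "$line" | sed -n 's/.*table="\([^"]*\)".*/\1/p')
chain=$(printf '%s\n' "$line" | sed -n 's/.*chain="\([^"]*\)".*/\1/p')

echo "$table $chain"   # -> nat KUBE-KUBELET-CANARY
```

This only classifies the failure; confirming the root cause would mean checking `ip6tables -t nat -L` inside the guest, which is not something the test harness does.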

TestMultiNode/serial/RestartKeepsNodes (277.38s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-945500
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-945500
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-945500: (1m38.3616971s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-945500 --wait=true -v=8 --alsologtostderr
E0416 18:16:07.058609    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-945500 --wait=true -v=8 --alsologtostderr: exit status 90 (2m47.4136269s)

-- stdout --
	* [multinode-945500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-945500" primary control-plane node in "multinode-945500" cluster
	* Restarting existing hyperv VM for "multinode-945500" ...
	
	

-- /stdout --
** stderr ** 
	W0416 18:14:19.363858   12460 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 18:14:19.422235   12460 out.go:291] Setting OutFile to fd 984 ...
	I0416 18:14:19.422235   12460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:14:19.422235   12460 out.go:304] Setting ErrFile to fd 768...
	I0416 18:14:19.422235   12460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:14:19.440081   12460 out.go:298] Setting JSON to false
	I0416 18:14:19.443284   12460 start.go:129] hostinfo: {"hostname":"minikube5","uptime":28889,"bootTime":1713262370,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 18:14:19.443284   12460 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 18:14:19.450008   12460 out.go:177] * [multinode-945500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 18:14:19.501091   12460 notify.go:220] Checking for updates...
	I0416 18:14:19.551103   12460 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 18:14:19.552382   12460 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 18:14:19.600073   12460 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 18:14:19.600129   12460 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 18:14:19.601088   12460 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 18:14:19.602718   12460 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:14:19.602985   12460 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 18:14:24.501276   12460 out.go:177] * Using the hyperv driver based on existing profile
	I0416 18:14:24.503020   12460 start.go:297] selected driver: hyperv
	I0416 18:14:24.503020   12460 start.go:901] validating driver "hyperv" against &{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.85.139 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 18:14:24.503020   12460 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 18:14:24.544109   12460 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 18:14:24.544109   12460 cni.go:84] Creating CNI manager for ""
	I0416 18:14:24.544109   12460 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0416 18:14:24.544109   12460 start.go:340] cluster config:
	{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.91.227 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.85.139 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:f
alse kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 18:14:24.544859   12460 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 18:14:24.549271   12460 out.go:177] * Starting "multinode-945500" primary control-plane node in "multinode-945500" cluster
	I0416 18:14:24.600607   12460 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 18:14:24.601275   12460 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 18:14:24.601275   12460 cache.go:56] Caching tarball of preloaded images
	I0416 18:14:24.601672   12460 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 18:14:24.601991   12460 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 18:14:24.602421   12460 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 18:14:24.605657   12460 start.go:360] acquireMachinesLock for multinode-945500: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 18:14:24.605880   12460 start.go:364] duration metric: took 83.9µs to acquireMachinesLock for "multinode-945500"
	I0416 18:14:24.606021   12460 start.go:96] Skipping create...Using existing machine configuration
	I0416 18:14:24.606021   12460 fix.go:54] fixHost starting: 
	I0416 18:14:24.606474   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:14:27.122442   12460 main.go:141] libmachine: [stdout =====>] : Off
	
	I0416 18:14:27.123440   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:14:27.123521   12460 fix.go:112] recreateIfNeeded on multinode-945500: state=Stopped err=<nil>
	W0416 18:14:27.123521   12460 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 18:14:27.153914   12460 out.go:177] * Restarting existing hyperv VM for "multinode-945500" ...
	I0416 18:14:27.155729   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-945500
	I0416 18:14:30.432845   12460 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:14:30.432916   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:14:30.432938   12460 main.go:141] libmachine: Waiting for host to start...
	I0416 18:14:30.432938   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:14:32.460369   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:14:32.460369   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:14:32.460369   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:14:34.714930   12460 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:14:34.714930   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:14:35.722839   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:14:37.717760   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:14:37.718011   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:14:37.718011   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:14:39.981831   12460 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:14:39.981831   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:14:40.993261   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:14:42.945470   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:14:42.945470   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:14:42.945548   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:14:45.214263   12460 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:14:45.214263   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:14:46.220259   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:14:48.236776   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:14:48.237251   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:14:48.237327   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:14:50.494275   12460 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:14:50.494275   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:14:51.498086   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:14:53.527465   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:14:53.527465   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:14:53.527730   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:14:55.896038   12460 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:14:55.896038   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:14:55.898395   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:14:57.816389   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:14:57.816389   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:14:57.817462   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:15:00.121263   12460 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:15:00.121263   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:00.121762   12460 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 18:15:00.123749   12460 machine.go:94] provisionDockerMachine start ...
	I0416 18:15:00.123837   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:15:02.107555   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:15:02.107555   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:02.108181   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:15:04.393727   12460 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:15:04.393727   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:04.398132   12460 main.go:141] libmachine: Using SSH client type: native
	I0416 18:15:04.398663   12460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.232 22 <nil> <nil>}
	I0416 18:15:04.398663   12460 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 18:15:04.538529   12460 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 18:15:04.538633   12460 buildroot.go:166] provisioning hostname "multinode-945500"
	I0416 18:15:04.538633   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:15:06.491892   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:15:06.491892   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:06.492845   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:15:08.778330   12460 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:15:08.778330   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:08.782890   12460 main.go:141] libmachine: Using SSH client type: native
	I0416 18:15:08.783191   12460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.232 22 <nil> <nil>}
	I0416 18:15:08.783191   12460 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-945500 && echo "multinode-945500" | sudo tee /etc/hostname
	I0416 18:15:08.950257   12460 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-945500
	
	I0416 18:15:08.950257   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:15:10.943650   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:15:10.943650   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:10.944022   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:15:13.288076   12460 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:15:13.288608   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:13.293787   12460 main.go:141] libmachine: Using SSH client type: native
	I0416 18:15:13.294385   12460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.232 22 <nil> <nil>}
	I0416 18:15:13.294385   12460 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-945500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-945500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-945500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 18:15:13.456028   12460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 18:15:13.456155   12460 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 18:15:13.456155   12460 buildroot.go:174] setting up certificates
	I0416 18:15:13.456155   12460 provision.go:84] configureAuth start
	I0416 18:15:13.456155   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:15:15.353172   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:15:15.354081   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:15.354164   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:15:17.688681   12460 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:15:17.688681   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:17.688681   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:15:19.605088   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:15:19.605088   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:19.605387   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:15:21.889913   12460 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:15:21.890922   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:21.890922   12460 provision.go:143] copyHostCerts
	I0416 18:15:21.891065   12460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 18:15:21.891235   12460 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 18:15:21.891310   12460 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 18:15:21.891623   12460 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 18:15:21.891964   12460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 18:15:21.892604   12460 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 18:15:21.892604   12460 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 18:15:21.892964   12460 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 18:15:21.893560   12460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 18:15:21.893560   12460 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 18:15:21.893560   12460 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 18:15:21.894093   12460 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 18:15:21.894327   12460 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-945500 san=[127.0.0.1 172.19.83.232 localhost minikube multinode-945500]
	I0416 18:15:22.258464   12460 provision.go:177] copyRemoteCerts
	I0416 18:15:22.267698   12460 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 18:15:22.267698   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:15:24.172852   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:15:24.173543   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:24.173638   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:15:26.386821   12460 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:15:26.386821   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:26.388013   12460 sshutil.go:53] new ssh client: &{IP:172.19.83.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:15:26.487751   12460 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2197402s)
	I0416 18:15:26.487783   12460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 18:15:26.488371   12460 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 18:15:26.537330   12460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 18:15:26.537330   12460 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0416 18:15:26.584295   12460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 18:15:26.584295   12460 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 18:15:26.627397   12460 provision.go:87] duration metric: took 13.1704442s to configureAuth
	I0416 18:15:26.627481   12460 buildroot.go:189] setting minikube options for container-runtime
	I0416 18:15:26.627634   12460 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:15:26.627634   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:15:28.529906   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:15:28.530776   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:28.530776   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:15:30.832502   12460 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:15:30.833506   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:30.837714   12460 main.go:141] libmachine: Using SSH client type: native
	I0416 18:15:30.838121   12460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.232 22 <nil> <nil>}
	I0416 18:15:30.838121   12460 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 18:15:30.978523   12460 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 18:15:30.978568   12460 buildroot.go:70] root file system type: tmpfs
	I0416 18:15:30.978924   12460 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 18:15:30.978990   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:15:32.874542   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:15:32.875546   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:32.875607   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:15:35.215419   12460 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:15:35.215419   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:35.221119   12460 main.go:141] libmachine: Using SSH client type: native
	I0416 18:15:35.221641   12460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.232 22 <nil> <nil>}
	I0416 18:15:35.221742   12460 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 18:15:35.382190   12460 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 18:15:35.382280   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:15:37.292374   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:15:37.292374   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:37.292374   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:15:39.500457   12460 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:15:39.500457   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:39.504406   12460 main.go:141] libmachine: Using SSH client type: native
	I0416 18:15:39.505244   12460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.232 22 <nil> <nil>}
	I0416 18:15:39.505325   12460 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 18:15:41.678820   12460 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 18:15:41.678820   12460 machine.go:97] duration metric: took 41.5527117s to provisionDockerMachine
	I0416 18:15:41.678820   12460 start.go:293] postStartSetup for "multinode-945500" (driver="hyperv")
	I0416 18:15:41.678820   12460 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 18:15:41.692211   12460 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 18:15:41.692211   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:15:43.558315   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:15:43.558798   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:43.558887   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:15:45.885875   12460 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:15:45.886383   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:45.886468   12460 sshutil.go:53] new ssh client: &{IP:172.19.83.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:15:45.991048   12460 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2985925s)
	I0416 18:15:46.001050   12460 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 18:15:46.007258   12460 command_runner.go:130] > NAME=Buildroot
	I0416 18:15:46.007258   12460 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 18:15:46.007258   12460 command_runner.go:130] > ID=buildroot
	I0416 18:15:46.007258   12460 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 18:15:46.007258   12460 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 18:15:46.007258   12460 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 18:15:46.007258   12460 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 18:15:46.007783   12460 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 18:15:46.007852   12460 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 18:15:46.007852   12460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 18:15:46.016700   12460 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 18:15:46.034251   12460 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 18:15:46.075044   12460 start.go:296] duration metric: took 4.3959745s for postStartSetup
	I0416 18:15:46.075044   12460 fix.go:56] duration metric: took 1m21.464397s for fixHost
	I0416 18:15:46.075044   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:15:48.088984   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:15:48.088984   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:48.089333   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:15:50.331222   12460 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:15:50.331222   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:50.335703   12460 main.go:141] libmachine: Using SSH client type: native
	I0416 18:15:50.335703   12460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.232 22 <nil> <nil>}
	I0416 18:15:50.336220   12460 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 18:15:50.485520   12460 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713291350.642900750
	
	I0416 18:15:50.485520   12460 fix.go:216] guest clock: 1713291350.642900750
	I0416 18:15:50.485520   12460 fix.go:229] Guest: 2024-04-16 18:15:50.64290075 +0000 UTC Remote: 2024-04-16 18:15:46.0750442 +0000 UTC m=+86.803260101 (delta=4.56785655s)
	I0416 18:15:50.485520   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:15:52.436454   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:15:52.436529   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:52.436620   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:15:54.775334   12460 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:15:54.776292   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:54.781247   12460 main.go:141] libmachine: Using SSH client type: native
	I0416 18:15:54.781538   12460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.232 22 <nil> <nil>}
	I0416 18:15:54.781538   12460 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713291350
	I0416 18:15:54.941140   12460 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 18:15:50 UTC 2024
	
	I0416 18:15:54.941140   12460 fix.go:236] clock set: Tue Apr 16 18:15:50 UTC 2024
	 (err=<nil>)
	I0416 18:15:54.941140   12460 start.go:83] releasing machines lock for "multinode-945500", held for 1m30.330131s
	I0416 18:15:54.941140   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:15:56.816089   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:15:56.816089   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:56.817017   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:15:59.064592   12460 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:15:59.065361   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:15:59.069119   12460 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 18:15:59.069119   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:15:59.078970   12460 ssh_runner.go:195] Run: cat /version.json
	I0416 18:15:59.078970   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:16:01.026317   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:16:01.026317   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:16:01.026790   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:16:01.031980   12460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:16:01.031980   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:16:01.031980   12460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:16:03.394392   12460 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:16:03.394392   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:16:03.395439   12460 sshutil.go:53] new ssh client: &{IP:172.19.83.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:16:03.426503   12460 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:16:03.426503   12460 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:16:03.426503   12460 sshutil.go:53] new ssh client: &{IP:172.19.83.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:16:03.495734   12460 command_runner.go:130] > {"iso_version": "v1.33.0-1713236417-18649", "kicbase_version": "v0.0.43-1713215244-18647", "minikube_version": "v1.33.0-beta.0", "commit": "4ec1a3e88a9f3ffb3930e555284d907468ae83a6"}
	I0416 18:16:03.495734   12460 ssh_runner.go:235] Completed: cat /version.json: (4.4165131s)
	I0416 18:16:03.505467   12460 ssh_runner.go:195] Run: systemctl --version
	I0416 18:16:03.736118   12460 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 18:16:03.736256   12460 command_runner.go:130] > systemd 252 (252)
	I0416 18:16:03.736256   12460 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6668711s)
	I0416 18:16:03.736256   12460 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0416 18:16:03.748778   12460 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 18:16:03.756967   12460 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0416 18:16:03.758237   12460 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 18:16:03.767165   12460 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 18:16:03.794484   12460 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0416 18:16:03.795763   12460 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 18:16:03.795763   12460 start.go:494] detecting cgroup driver to use...
	I0416 18:16:03.796098   12460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:16:03.828552   12460 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0416 18:16:03.838541   12460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 18:16:03.864744   12460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 18:16:03.882554   12460 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 18:16:03.893546   12460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 18:16:03.922376   12460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:16:03.948642   12460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 18:16:03.975933   12460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:16:04.006451   12460 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 18:16:04.035362   12460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 18:16:04.065504   12460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 18:16:04.095256   12460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 18:16:04.123326   12460 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 18:16:04.141690   12460 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 18:16:04.150579   12460 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 18:16:04.179161   12460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:16:04.392610   12460 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 18:16:04.425698   12460 start.go:494] detecting cgroup driver to use...
	I0416 18:16:04.435452   12460 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 18:16:04.461063   12460 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0416 18:16:04.461194   12460 command_runner.go:130] > [Unit]
	I0416 18:16:04.461194   12460 command_runner.go:130] > Description=Docker Application Container Engine
	I0416 18:16:04.461194   12460 command_runner.go:130] > Documentation=https://docs.docker.com
	I0416 18:16:04.461194   12460 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0416 18:16:04.461194   12460 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0416 18:16:04.461194   12460 command_runner.go:130] > StartLimitBurst=3
	I0416 18:16:04.461296   12460 command_runner.go:130] > StartLimitIntervalSec=60
	I0416 18:16:04.461296   12460 command_runner.go:130] > [Service]
	I0416 18:16:04.461296   12460 command_runner.go:130] > Type=notify
	I0416 18:16:04.461296   12460 command_runner.go:130] > Restart=on-failure
	I0416 18:16:04.461296   12460 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0416 18:16:04.461296   12460 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0416 18:16:04.461395   12460 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0416 18:16:04.461395   12460 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0416 18:16:04.461395   12460 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0416 18:16:04.461395   12460 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0416 18:16:04.461395   12460 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0416 18:16:04.461495   12460 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0416 18:16:04.461495   12460 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0416 18:16:04.461495   12460 command_runner.go:130] > ExecStart=
	I0416 18:16:04.461495   12460 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0416 18:16:04.461495   12460 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0416 18:16:04.461495   12460 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0416 18:16:04.461630   12460 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0416 18:16:04.461630   12460 command_runner.go:130] > LimitNOFILE=infinity
	I0416 18:16:04.461630   12460 command_runner.go:130] > LimitNPROC=infinity
	I0416 18:16:04.461630   12460 command_runner.go:130] > LimitCORE=infinity
	I0416 18:16:04.461630   12460 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0416 18:16:04.461630   12460 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0416 18:16:04.461630   12460 command_runner.go:130] > TasksMax=infinity
	I0416 18:16:04.461630   12460 command_runner.go:130] > TimeoutStartSec=0
	I0416 18:16:04.461630   12460 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0416 18:16:04.461759   12460 command_runner.go:130] > Delegate=yes
	I0416 18:16:04.461759   12460 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0416 18:16:04.461759   12460 command_runner.go:130] > KillMode=process
	I0416 18:16:04.461759   12460 command_runner.go:130] > [Install]
	I0416 18:16:04.461834   12460 command_runner.go:130] > WantedBy=multi-user.target
	I0416 18:16:04.471722   12460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:16:04.506263   12460 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 18:16:04.548545   12460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:16:04.582323   12460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:16:04.616991   12460 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 18:16:04.674795   12460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:16:04.700719   12460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:16:04.735983   12460 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0416 18:16:04.745143   12460 ssh_runner.go:195] Run: which cri-dockerd
	I0416 18:16:04.750442   12460 command_runner.go:130] > /usr/bin/cri-dockerd
	I0416 18:16:04.759752   12460 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 18:16:04.778019   12460 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 18:16:04.819050   12460 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 18:16:05.014309   12460 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 18:16:05.215040   12460 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 18:16:05.215309   12460 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 18:16:05.255920   12460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:16:05.447095   12460 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 18:17:06.581382   12460 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0416 18:17:06.581382   12460 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0416 18:17:06.582345   12460 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1316103s)
	I0416 18:17:06.592953   12460 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 18:17:06.616567   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 systemd[1]: Starting Docker Application Container Engine...
	I0416 18:17:06.616567   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[660]: time="2024-04-16T18:15:40.232140258Z" level=info msg="Starting up"
	I0416 18:17:06.616567   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[660]: time="2024-04-16T18:15:40.233188890Z" level=info msg="containerd not running, starting managed containerd"
	I0416 18:17:06.616567   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[660]: time="2024-04-16T18:15:40.238734385Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=667
	I0416 18:17:06.616567   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.266041430Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0416 18:17:06.616567   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.291268026Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0416 18:17:06.616567   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.291367404Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0416 18:17:06.616567   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.291420446Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0416 18:17:06.616567   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.291432256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.292144620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.292228487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.292382909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.292510911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.292530326Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.292541435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.292918334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.293497092Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.296310122Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.296428616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.296576633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.296750371Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.297493860Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.297586134Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.297600545Z" level=info msg="metadata content store policy set" policy=shared
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.303266336Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.303371820Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.303393937Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.303407748Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.303420258Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.303480005Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.303853802Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.303993813Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304083183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304100797Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304114909Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304127218Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304138827Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304151237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304164047Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304185364Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304198175Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304209183Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304226097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304238807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304249715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304261225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304273234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304285244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304295852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304307261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304319371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304332081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304348794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304360503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304375915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304389426Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304407941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304418449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0416 18:17:06.617586   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304428857Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304469590Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304548552Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304587183Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304621110Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304775532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304922749Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.305010719Z" level=info msg="NRI interface is disabled by configuration."
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.305333474Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.305463077Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.305563357Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.305629009Z" level=info msg="containerd successfully booted in 0.042667s"
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:41 multinode-945500 dockerd[660]: time="2024-04-16T18:15:41.277295881Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:41 multinode-945500 dockerd[660]: time="2024-04-16T18:15:41.417500553Z" level=info msg="Loading containers: start."
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:41 multinode-945500 dockerd[660]: time="2024-04-16T18:15:41.707673656Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:41 multinode-945500 dockerd[660]: time="2024-04-16T18:15:41.774770587Z" level=info msg="Loading containers: done."
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:41 multinode-945500 dockerd[660]: time="2024-04-16T18:15:41.793324888Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:41 multinode-945500 dockerd[660]: time="2024-04-16T18:15:41.794296111Z" level=info msg="Daemon has completed initialization"
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:41 multinode-945500 dockerd[660]: time="2024-04-16T18:15:41.835893653Z" level=info msg="API listen on [::]:2376"
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:41 multinode-945500 dockerd[660]: time="2024-04-16T18:15:41.836026352Z" level=info msg="API listen on /var/run/docker.sock"
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:15:41 multinode-945500 systemd[1]: Started Docker Application Container Engine.
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:16:05 multinode-945500 systemd[1]: Stopping Docker Application Container Engine...
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:16:05 multinode-945500 dockerd[660]: time="2024-04-16T18:16:05.617982285Z" level=info msg="Processing signal 'terminated'"
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:16:05 multinode-945500 dockerd[660]: time="2024-04-16T18:16:05.620238145Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:16:05 multinode-945500 dockerd[660]: time="2024-04-16T18:16:05.620788333Z" level=info msg="Daemon shutdown complete"
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:16:05 multinode-945500 dockerd[660]: time="2024-04-16T18:16:05.620836140Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:16:05 multinode-945500 dockerd[660]: time="2024-04-16T18:16:05.620860844Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:16:06 multinode-945500 systemd[1]: docker.service: Deactivated successfully.
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:16:06 multinode-945500 systemd[1]: Stopped Docker Application Container Engine.
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:16:06 multinode-945500 systemd[1]: Starting Docker Application Container Engine...
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:16:06 multinode-945500 dockerd[1051]: time="2024-04-16T18:16:06.706231191Z" level=info msg="Starting up"
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:17:06 multinode-945500 dockerd[1051]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:17:06 multinode-945500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:17:06 multinode-945500 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0416 18:17:06.618528   12460 command_runner.go:130] > Apr 16 18:17:06 multinode-945500 systemd[1]: Failed to start Docker Application Container Engine.
	I0416 18:17:06.624532   12460 out.go:177] 
	W0416 18:17:06.624532   12460 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 18:15:40 multinode-945500 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:15:40 multinode-945500 dockerd[660]: time="2024-04-16T18:15:40.232140258Z" level=info msg="Starting up"
	Apr 16 18:15:40 multinode-945500 dockerd[660]: time="2024-04-16T18:15:40.233188890Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 18:15:40 multinode-945500 dockerd[660]: time="2024-04-16T18:15:40.238734385Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=667
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.266041430Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.291268026Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.291367404Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.291420446Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.291432256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.292144620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.292228487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.292382909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.292510911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.292530326Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.292541435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.292918334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.293497092Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.296310122Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.296428616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.296576633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.296750371Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.297493860Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.297586134Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.297600545Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.303266336Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.303371820Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.303393937Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.303407748Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.303420258Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.303480005Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.303853802Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.303993813Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304083183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304100797Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304114909Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304127218Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304138827Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304151237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304164047Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304185364Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304198175Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304209183Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304226097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304238807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304249715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304261225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304273234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304285244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304295852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304307261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304319371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304332081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304348794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304360503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304375915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304389426Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304407941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304418449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304428857Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304469590Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304548552Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304587183Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304621110Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304775532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.304922749Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.305010719Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.305333474Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.305463077Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.305563357Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 18:15:40 multinode-945500 dockerd[667]: time="2024-04-16T18:15:40.305629009Z" level=info msg="containerd successfully booted in 0.042667s"
	Apr 16 18:15:41 multinode-945500 dockerd[660]: time="2024-04-16T18:15:41.277295881Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 18:15:41 multinode-945500 dockerd[660]: time="2024-04-16T18:15:41.417500553Z" level=info msg="Loading containers: start."
	Apr 16 18:15:41 multinode-945500 dockerd[660]: time="2024-04-16T18:15:41.707673656Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 16 18:15:41 multinode-945500 dockerd[660]: time="2024-04-16T18:15:41.774770587Z" level=info msg="Loading containers: done."
	Apr 16 18:15:41 multinode-945500 dockerd[660]: time="2024-04-16T18:15:41.793324888Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 18:15:41 multinode-945500 dockerd[660]: time="2024-04-16T18:15:41.794296111Z" level=info msg="Daemon has completed initialization"
	Apr 16 18:15:41 multinode-945500 dockerd[660]: time="2024-04-16T18:15:41.835893653Z" level=info msg="API listen on [::]:2376"
	Apr 16 18:15:41 multinode-945500 dockerd[660]: time="2024-04-16T18:15:41.836026352Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 18:15:41 multinode-945500 systemd[1]: Started Docker Application Container Engine.
	Apr 16 18:16:05 multinode-945500 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 18:16:05 multinode-945500 dockerd[660]: time="2024-04-16T18:16:05.617982285Z" level=info msg="Processing signal 'terminated'"
	Apr 16 18:16:05 multinode-945500 dockerd[660]: time="2024-04-16T18:16:05.620238145Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 18:16:05 multinode-945500 dockerd[660]: time="2024-04-16T18:16:05.620788333Z" level=info msg="Daemon shutdown complete"
	Apr 16 18:16:05 multinode-945500 dockerd[660]: time="2024-04-16T18:16:05.620836140Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 18:16:05 multinode-945500 dockerd[660]: time="2024-04-16T18:16:05.620860844Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 18:16:06 multinode-945500 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 18:16:06 multinode-945500 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 18:16:06 multinode-945500 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:16:06 multinode-945500 dockerd[1051]: time="2024-04-16T18:16:06.706231191Z" level=info msg="Starting up"
	Apr 16 18:17:06 multinode-945500 dockerd[1051]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 18:17:06 multinode-945500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 18:17:06 multinode-945500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 18:17:06 multinode-945500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	W0416 18:17:06.625527   12460 out.go:239] * 
	* 
	W0416 18:17:06.626528   12460 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 18:17:06.626528   12460 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-945500" : exit status 90
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-945500
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-945500	172.19.91.227
multinode-945500-m02	172.19.91.6
multinode-945500-m03	172.19.85.139

                                                
                                                
After restart: multinode-945500	172.19.83.232
multinode-945500-m02	172.19.91.6
multinode-945500-m03	172.19.85.139
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-945500 -n multinode-945500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-945500 -n multinode-945500: exit status 6 (11.0913334s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 18:17:07.075327    9772 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0416 18:17:18.019178    9772 status.go:417] kubeconfig endpoint: get endpoint: "multinode-945500" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-945500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (277.38s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (32.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-945500 node delete m03: exit status 103 (6.5331001s)

                                                
                                                
-- stdout --
	* The control-plane node multinode-945500 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p multinode-945500"

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 18:17:18.176539    4140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-windows-amd64.exe -p multinode-945500 node delete m03": exit status 103
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-945500 status --alsologtostderr: exit status 7 (14.5897711s)

                                                
                                                
-- stdout --
	multinode-945500
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
	multinode-945500-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-945500-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 18:17:24.712217    2744 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 18:17:24.763918    2744 out.go:291] Setting OutFile to fd 972 ...
	I0416 18:17:24.764874    2744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:17:24.764874    2744 out.go:304] Setting ErrFile to fd 824...
	I0416 18:17:24.764874    2744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:17:24.782195    2744 out.go:298] Setting JSON to false
	I0416 18:17:24.782195    2744 mustload.go:65] Loading cluster: multinode-945500
	I0416 18:17:24.782195    2744 notify.go:220] Checking for updates...
	I0416 18:17:24.783179    2744 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:17:24.783179    2744 status.go:255] checking status of multinode-945500 ...
	I0416 18:17:24.784250    2744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:17:26.708440    2744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:17:26.708559    2744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:17:26.708559    2744 status.go:330] multinode-945500 host status = "Running" (err=<nil>)
	I0416 18:17:26.708663    2744 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:17:26.709494    2744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:17:28.672473    2744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:17:28.672982    2744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:17:28.673037    2744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:17:30.996790    2744 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:17:30.996790    2744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:17:30.996790    2744 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:17:31.005525    2744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 18:17:31.005525    2744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:17:32.908809    2744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:17:32.909735    2744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:17:32.909816    2744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:17:35.161675    2744 main.go:141] libmachine: [stdout =====>] : 172.19.83.232
	
	I0416 18:17:35.161675    2744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:17:35.161675    2744 sshutil.go:53] new ssh client: &{IP:172.19.83.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:17:35.256643    2744 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.2508768s)
	I0416 18:17:35.264968    2744 ssh_runner.go:195] Run: systemctl --version
	I0416 18:17:35.285235    2744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0416 18:17:35.308603    2744 status.go:417] kubeconfig endpoint: get endpoint: "multinode-945500" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 18:17:35.308670    2744 api_server.go:166] Checking apiserver status ...
	I0416 18:17:35.319507    2744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0416 18:17:35.338232    2744 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0416 18:17:35.338275    2744 status.go:422] multinode-945500 apiserver status = Stopped (err=<nil>)
	I0416 18:17:35.338373    2744 status.go:257] multinode-945500 status: &{Name:multinode-945500 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 18:17:35.338373    2744 status.go:255] checking status of multinode-945500-m02 ...
	I0416 18:17:35.338373    2744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:17:37.267731    2744 main.go:141] libmachine: [stdout =====>] : Off
	
	I0416 18:17:37.267883    2744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:17:37.267963    2744 status.go:330] multinode-945500-m02 host status = "Stopped" (err=<nil>)
	I0416 18:17:37.268005    2744 status.go:343] host is not running, skipping remaining checks
	I0416 18:17:37.268005    2744 status.go:257] multinode-945500-m02 status: &{Name:multinode-945500-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0416 18:17:37.268005    2744 status.go:255] checking status of multinode-945500-m03 ...
	I0416 18:17:37.268611    2744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:17:39.173195    2744 main.go:141] libmachine: [stdout =====>] : Off
	
	I0416 18:17:39.173195    2744 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:17:39.173195    2744 status.go:330] multinode-945500-m03 host status = "Stopped" (err=<nil>)
	I0416 18:17:39.173670    2744 status.go:343] host is not running, skipping remaining checks
	I0416 18:17:39.173670    2744 status.go:257] multinode-945500-m03 status: &{Name:multinode-945500-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-945500 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-945500 -n multinode-945500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-945500 -n multinode-945500: exit status 6 (10.8988095s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0416 18:17:39.309272   10224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0416 18:17:50.046592   10224 status.go:417] kubeconfig endpoint: get endpoint: "multinode-945500" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-945500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/DeleteNode (32.02s)

TestMultiNode/serial/StopMultiNode (99.46s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 stop
E0416 18:19:10.316908    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-945500 stop: (1m25.4234461s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-945500 status: exit status 7 (5.9437322s)

-- stdout --
	multinode-945500
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-945500-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-945500-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0416 18:19:15.632234     940 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-945500 status --alsologtostderr: exit status 7 (5.957313s)

-- stdout --
	multinode-945500
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-945500-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-945500-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0416 18:19:21.587965    4876 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 18:19:21.643231    4876 out.go:291] Setting OutFile to fd 876 ...
	I0416 18:19:21.644189    4876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:19:21.644189    4876 out.go:304] Setting ErrFile to fd 672...
	I0416 18:19:21.644189    4876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:19:21.656866    4876 out.go:298] Setting JSON to false
	I0416 18:19:21.656866    4876 mustload.go:65] Loading cluster: multinode-945500
	I0416 18:19:21.657837    4876 notify.go:220] Checking for updates...
	I0416 18:19:21.657971    4876 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:19:21.657971    4876 status.go:255] checking status of multinode-945500 ...
	I0416 18:19:21.659127    4876 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:19:23.630807    4876 main.go:141] libmachine: [stdout =====>] : Off
	
	I0416 18:19:23.630807    4876 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:23.630913    4876 status.go:330] multinode-945500 host status = "Stopped" (err=<nil>)
	I0416 18:19:23.630913    4876 status.go:343] host is not running, skipping remaining checks
	I0416 18:19:23.630913    4876 status.go:257] multinode-945500 status: &{Name:multinode-945500 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 18:19:23.631031    4876 status.go:255] checking status of multinode-945500-m02 ...
	I0416 18:19:23.631687    4876 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:19:25.528235    4876 main.go:141] libmachine: [stdout =====>] : Off
	
	I0416 18:19:25.528235    4876 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:25.528235    4876 status.go:330] multinode-945500-m02 host status = "Stopped" (err=<nil>)
	I0416 18:19:25.528235    4876 status.go:343] host is not running, skipping remaining checks
	I0416 18:19:25.528235    4876 status.go:257] multinode-945500-m02 status: &{Name:multinode-945500-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0416 18:19:25.528235    4876 status.go:255] checking status of multinode-945500-m03 ...
	I0416 18:19:25.529083    4876 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:19:27.406158    4876 main.go:141] libmachine: [stdout =====>] : Off
	
	I0416 18:19:27.406158    4876 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:27.407003    4876 status.go:330] multinode-945500-m03 host status = "Stopped" (err=<nil>)
	I0416 18:19:27.407003    4876 status.go:343] host is not running, skipping remaining checks
	I0416 18:19:27.407003    4876 status.go:257] multinode-945500-m03 status: &{Name:multinode-945500-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-windows-amd64.exe -p multinode-945500 status --alsologtostderr": multinode-945500
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode-945500-m02
type: Worker
host: Stopped
kubelet: Stopped

multinode-945500-m03
type: Worker
host: Stopped
kubelet: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-945500 status --alsologtostderr": multinode-945500
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode-945500-m02
type: Worker
host: Stopped
kubelet: Stopped

multinode-945500-m03
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-945500 -n multinode-945500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-945500 -n multinode-945500: exit status 7 (2.1295304s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0416 18:19:27.545202    2688 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-945500" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (99.46s)

TestMultiNode/serial/RestartMultiNode (324.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-945500 --wait=true -v=8 --alsologtostderr --driver=hyperv
E0416 18:21:07.090893    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-945500 --wait=true -v=8 --alsologtostderr --driver=hyperv: exit status 90 (4m52.6715507s)

-- stdout --
	* [multinode-945500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-945500" primary control-plane node in "multinode-945500" cluster
	* Restarting existing hyperv VM for "multinode-945500" ...
	* Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	
	* Starting "multinode-945500-m02" worker node in "multinode-945500" cluster
	* Restarting existing hyperv VM for "multinode-945500-m02" ...
	* Found network options:
	  - NO_PROXY=172.19.83.104
	  - NO_PROXY=172.19.83.104
	
	

-- /stdout --
** stderr ** 
	W0416 18:19:29.663124    6100 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 18:19:29.713253    6100 out.go:291] Setting OutFile to fd 828 ...
	I0416 18:19:29.714291    6100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:19:29.714291    6100 out.go:304] Setting ErrFile to fd 884...
	I0416 18:19:29.714291    6100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:19:29.736940    6100 out.go:298] Setting JSON to false
	I0416 18:19:29.739598    6100 start.go:129] hostinfo: {"hostname":"minikube5","uptime":29199,"bootTime":1713262370,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 18:19:29.739598    6100 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 18:19:29.741462    6100 out.go:177] * [multinode-945500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 18:19:29.741462    6100 notify.go:220] Checking for updates...
	I0416 18:19:29.741462    6100 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 18:19:29.743426    6100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 18:19:29.743949    6100 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 18:19:29.744501    6100 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 18:19:29.745073    6100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 18:19:29.746122    6100 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:19:29.747981    6100 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 18:19:34.540526    6100 out.go:177] * Using the hyperv driver based on existing profile
	I0416 18:19:34.540718    6100 start.go:297] selected driver: hyperv
	I0416 18:19:34.540718    6100 start.go:901] validating driver "hyperv" against &{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.83.232 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.85.139 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 18:19:34.541390    6100 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 18:19:34.584517    6100 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 18:19:34.584517    6100 cni.go:84] Creating CNI manager for ""
	I0416 18:19:34.584517    6100 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0416 18:19:34.584517    6100 start.go:340] cluster config:
	{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.83.232 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.85.139 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 18:19:34.584517    6100 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 18:19:34.585999    6100 out.go:177] * Starting "multinode-945500" primary control-plane node in "multinode-945500" cluster
	I0416 18:19:34.586606    6100 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 18:19:34.587302    6100 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 18:19:34.587302    6100 cache.go:56] Caching tarball of preloaded images
	I0416 18:19:34.587441    6100 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 18:19:34.587441    6100 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 18:19:34.588075    6100 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 18:19:34.589695    6100 start.go:360] acquireMachinesLock for multinode-945500: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 18:19:34.590036    6100 start.go:364] duration metric: took 341.6µs to acquireMachinesLock for "multinode-945500"
	I0416 18:19:34.590036    6100 start.go:96] Skipping create...Using existing machine configuration
	I0416 18:19:34.590036    6100 fix.go:54] fixHost starting: 
	I0416 18:19:34.590734    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:19:37.035867    6100 main.go:141] libmachine: [stdout =====>] : Off
	
	I0416 18:19:37.035867    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:37.036414    6100 fix.go:112] recreateIfNeeded on multinode-945500: state=Stopped err=<nil>
	W0416 18:19:37.036443    6100 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 18:19:37.037461    6100 out.go:177] * Restarting existing hyperv VM for "multinode-945500" ...
	I0416 18:19:37.038370    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-945500
	I0416 18:19:39.684634    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:19:39.684634    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:39.684634    6100 main.go:141] libmachine: Waiting for host to start...
	I0416 18:19:39.684634    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:19:41.686593    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:19:41.686680    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:41.686680    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:19:43.975342    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:19:43.975342    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:44.978533    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:19:47.010715    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:19:47.010715    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:47.010812    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:19:49.319391    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:19:49.319391    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:50.321898    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:19:52.351754    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:19:52.351754    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:52.352018    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:19:54.664580    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:19:54.664809    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:55.678807    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:19:57.651910    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:19:57.651910    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:57.651910    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:19:59.899268    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:19:59.899268    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:00.906345    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:02.927492    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:02.927492    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:02.928435    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:05.322020    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:05.322020    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:05.324180    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:07.277382    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:07.277382    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:07.277382    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:09.610017    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:09.610066    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:09.610066    6100 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 18:20:09.612167    6100 machine.go:94] provisionDockerMachine start ...
	I0416 18:20:09.612232    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:11.583873    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:11.583873    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:11.583873    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:13.902027    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:13.902027    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:13.905785    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:20:13.906582    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.104 22 <nil> <nil>}
	I0416 18:20:13.906582    6100 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 18:20:14.037561    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 18:20:14.037758    6100 buildroot.go:166] provisioning hostname "multinode-945500"
	I0416 18:20:14.037758    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:15.886948    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:15.888002    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:15.888031    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:18.221981    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:18.221981    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:18.227529    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:20:18.228140    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.104 22 <nil> <nil>}
	I0416 18:20:18.228140    6100 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-945500 && echo "multinode-945500" | sudo tee /etc/hostname
	I0416 18:20:18.392067    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-945500
	
	I0416 18:20:18.392067    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:20.345161    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:20.345161    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:20.345404    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:22.575320    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:22.575320    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:22.579474    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:20:22.579885    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.104 22 <nil> <nil>}
	I0416 18:20:22.579885    6100 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-945500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-945500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-945500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 18:20:22.731877    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 18:20:22.732031    6100 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 18:20:22.732107    6100 buildroot.go:174] setting up certificates
	I0416 18:20:22.732107    6100 provision.go:84] configureAuth start
	I0416 18:20:22.732199    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:24.704769    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:24.705086    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:24.705274    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:27.027891    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:27.027891    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:27.028603    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:28.944328    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:28.944513    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:28.944513    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:31.236918    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:31.237060    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:31.237060    6100 provision.go:143] copyHostCerts
	I0416 18:20:31.237060    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 18:20:31.237060    6100 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 18:20:31.237060    6100 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 18:20:31.237732    6100 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 18:20:31.238878    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 18:20:31.238934    6100 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 18:20:31.238934    6100 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 18:20:31.238934    6100 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 18:20:31.240190    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 18:20:31.240190    6100 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 18:20:31.240190    6100 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 18:20:31.240190    6100 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 18:20:31.240878    6100 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-945500 san=[127.0.0.1 172.19.83.104 localhost minikube multinode-945500]
	I0416 18:20:31.794591    6100 provision.go:177] copyRemoteCerts
	I0416 18:20:31.802576    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 18:20:31.802576    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:33.710432    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:33.711149    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:33.711149    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:36.003295    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:36.003295    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:36.004027    6100 sshutil.go:53] new ssh client: &{IP:172.19.83.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:20:36.114016    6100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3109358s)
	I0416 18:20:36.114106    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 18:20:36.114670    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 18:20:36.154557    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 18:20:36.155367    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 18:20:36.195293    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 18:20:36.195512    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0416 18:20:36.234818    6100 provision.go:87] duration metric: took 13.5019442s to configureAuth
	I0416 18:20:36.234818    6100 buildroot.go:189] setting minikube options for container-runtime
	I0416 18:20:36.235561    6100 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:20:36.235647    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:38.121086    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:38.121086    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:38.121351    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:40.392545    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:40.392545    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:40.397568    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:20:40.398089    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.104 22 <nil> <nil>}
	I0416 18:20:40.398089    6100 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 18:20:40.543936    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 18:20:40.544045    6100 buildroot.go:70] root file system type: tmpfs
	I0416 18:20:40.544176    6100 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 18:20:40.544295    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:42.429691    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:42.429691    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:42.429773    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:44.646262    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:44.646262    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:44.650984    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:20:44.650984    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.104 22 <nil> <nil>}
	I0416 18:20:44.650984    6100 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 18:20:44.816673    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 18:20:44.816673    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:46.731043    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:46.732029    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:46.732101    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:49.051075    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:49.051075    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:49.057279    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:20:49.057888    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.104 22 <nil> <nil>}
	I0416 18:20:49.057888    6100 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 18:20:51.294265    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 18:20:51.294265    6100 machine.go:97] duration metric: took 41.6797306s to provisionDockerMachine
	I0416 18:20:51.294265    6100 start.go:293] postStartSetup for "multinode-945500" (driver="hyperv")
	I0416 18:20:51.294265    6100 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 18:20:51.305946    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 18:20:51.305946    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:53.261389    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:53.262349    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:53.262515    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:55.558863    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:55.559709    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:55.560067    6100 sshutil.go:53] new ssh client: &{IP:172.19.83.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:20:55.677259    6100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3710643s)
	I0416 18:20:55.687682    6100 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 18:20:55.692956    6100 command_runner.go:130] > NAME=Buildroot
	I0416 18:20:55.692956    6100 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 18:20:55.692956    6100 command_runner.go:130] > ID=buildroot
	I0416 18:20:55.692956    6100 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 18:20:55.692956    6100 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 18:20:55.694204    6100 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 18:20:55.694286    6100 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 18:20:55.694798    6100 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 18:20:55.696124    6100 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 18:20:55.696187    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 18:20:55.705933    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 18:20:55.722841    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 18:20:55.766224    6100 start.go:296] duration metric: took 4.4717048s for postStartSetup
	I0416 18:20:55.766327    6100 fix.go:56] duration metric: took 1m21.1716799s for fixHost
	I0416 18:20:55.766327    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:57.654600    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:57.655594    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:57.655628    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:59.909578    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:59.909578    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:59.913240    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:20:59.913877    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.104 22 <nil> <nil>}
	I0416 18:20:59.913877    6100 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 18:21:00.048276    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713291660.212492767
	
	I0416 18:21:00.048276    6100 fix.go:216] guest clock: 1713291660.212492767
	I0416 18:21:00.048276    6100 fix.go:229] Guest: 2024-04-16 18:21:00.212492767 +0000 UTC Remote: 2024-04-16 18:20:55.7663274 +0000 UTC m=+86.183018801 (delta=4.446165367s)
	I0416 18:21:00.048276    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:21:01.958531    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:21:01.958531    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:01.958531    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:21:04.245872    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:21:04.245872    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:04.249936    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:21:04.250593    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.104 22 <nil> <nil>}
	I0416 18:21:04.250688    6100 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713291660
	I0416 18:21:04.396802    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 18:21:00 UTC 2024
	
	I0416 18:21:04.396802    6100 fix.go:236] clock set: Tue Apr 16 18:21:00 UTC 2024
	 (err=<nil>)
	I0416 18:21:04.396802    6100 start.go:83] releasing machines lock for "multinode-945500", held for 1m29.8016651s
	I0416 18:21:04.397497    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:21:06.391026    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:21:06.391713    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:06.391792    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:21:08.724007    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:21:08.724007    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:08.729729    6100 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 18:21:08.729810    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:21:08.745701    6100 ssh_runner.go:195] Run: cat /version.json
	I0416 18:21:08.745701    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:21:10.765717    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:21:10.765717    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:10.766516    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:21:10.766964    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:21:10.767082    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:10.767175    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:21:13.200149    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:21:13.200149    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:13.200479    6100 sshutil.go:53] new ssh client: &{IP:172.19.83.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:21:13.234421    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:21:13.235339    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:13.235821    6100 sshutil.go:53] new ssh client: &{IP:172.19.83.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:21:13.431351    6100 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 18:21:13.431470    6100 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7013922s)
	I0416 18:21:13.431470    6100 command_runner.go:130] > {"iso_version": "v1.33.0-1713236417-18649", "kicbase_version": "v0.0.43-1713215244-18647", "minikube_version": "v1.33.0-beta.0", "commit": "4ec1a3e88a9f3ffb3930e555284d907468ae83a6"}
	I0416 18:21:13.431635    6100 ssh_runner.go:235] Completed: cat /version.json: (4.6855024s)
	I0416 18:21:13.440877    6100 ssh_runner.go:195] Run: systemctl --version
	I0416 18:21:13.449293    6100 command_runner.go:130] > systemd 252 (252)
	I0416 18:21:13.449293    6100 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0416 18:21:13.457907    6100 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 18:21:13.464975    6100 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0416 18:21:13.465092    6100 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 18:21:13.475362    6100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 18:21:13.498990    6100 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0416 18:21:13.499426    6100 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 18:21:13.499426    6100 start.go:494] detecting cgroup driver to use...
	I0416 18:21:13.499426    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:21:13.527926    6100 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0416 18:21:13.539397    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 18:21:13.567918    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 18:21:13.586342    6100 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 18:21:13.593613    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 18:21:13.619605    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:21:13.647518    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 18:21:13.671517    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:21:13.700034    6100 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 18:21:13.729590    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 18:21:13.758931    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 18:21:13.785229    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 18:21:13.819152    6100 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 18:21:13.837863    6100 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 18:21:13.847027    6100 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 18:21:13.871883    6100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:21:14.059448    6100 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 18:21:14.090660    6100 start.go:494] detecting cgroup driver to use...
	I0416 18:21:14.099280    6100 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 18:21:14.124204    6100 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0416 18:21:14.124204    6100 command_runner.go:130] > [Unit]
	I0416 18:21:14.124204    6100 command_runner.go:130] > Description=Docker Application Container Engine
	I0416 18:21:14.124204    6100 command_runner.go:130] > Documentation=https://docs.docker.com
	I0416 18:21:14.124204    6100 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0416 18:21:14.124204    6100 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0416 18:21:14.124204    6100 command_runner.go:130] > StartLimitBurst=3
	I0416 18:21:14.124204    6100 command_runner.go:130] > StartLimitIntervalSec=60
	I0416 18:21:14.124204    6100 command_runner.go:130] > [Service]
	I0416 18:21:14.124204    6100 command_runner.go:130] > Type=notify
	I0416 18:21:14.124204    6100 command_runner.go:130] > Restart=on-failure
	I0416 18:21:14.124204    6100 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0416 18:21:14.124204    6100 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0416 18:21:14.124204    6100 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0416 18:21:14.124204    6100 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0416 18:21:14.124204    6100 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0416 18:21:14.124204    6100 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0416 18:21:14.124204    6100 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0416 18:21:14.124204    6100 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0416 18:21:14.124204    6100 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0416 18:21:14.124204    6100 command_runner.go:130] > ExecStart=
	I0416 18:21:14.124204    6100 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0416 18:21:14.124204    6100 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0416 18:21:14.124204    6100 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0416 18:21:14.124204    6100 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0416 18:21:14.124204    6100 command_runner.go:130] > LimitNOFILE=infinity
	I0416 18:21:14.124204    6100 command_runner.go:130] > LimitNPROC=infinity
	I0416 18:21:14.124204    6100 command_runner.go:130] > LimitCORE=infinity
	I0416 18:21:14.124204    6100 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0416 18:21:14.124204    6100 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0416 18:21:14.124204    6100 command_runner.go:130] > TasksMax=infinity
	I0416 18:21:14.124204    6100 command_runner.go:130] > TimeoutStartSec=0
	I0416 18:21:14.124204    6100 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0416 18:21:14.124204    6100 command_runner.go:130] > Delegate=yes
	I0416 18:21:14.124204    6100 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0416 18:21:14.124204    6100 command_runner.go:130] > KillMode=process
	I0416 18:21:14.124204    6100 command_runner.go:130] > [Install]
	I0416 18:21:14.125341    6100 command_runner.go:130] > WantedBy=multi-user.target
	I0416 18:21:14.137155    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:21:14.169383    6100 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 18:21:14.208199    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:21:14.238621    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:21:14.271453    6100 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 18:21:14.316438    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:21:14.338599    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:21:14.373634    6100 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0416 18:21:14.384311    6100 ssh_runner.go:195] Run: which cri-dockerd
	I0416 18:21:14.390675    6100 command_runner.go:130] > /usr/bin/cri-dockerd
	I0416 18:21:14.402419    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 18:21:14.419197    6100 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 18:21:14.463750    6100 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 18:21:14.654123    6100 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 18:21:14.834262    6100 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 18:21:14.834536    6100 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 18:21:14.872316    6100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:21:15.057607    6100 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 18:21:17.594720    6100 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5369094s)
	I0416 18:21:17.604067    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 18:21:17.639346    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 18:21:17.671723    6100 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 18:21:17.857796    6100 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 18:21:18.049141    6100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:21:18.235522    6100 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 18:21:18.277990    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 18:21:18.311732    6100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:21:18.473958    6100 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 18:21:18.571850    6100 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 18:21:18.584682    6100 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 18:21:18.595121    6100 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0416 18:21:18.595121    6100 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0416 18:21:18.595121    6100 command_runner.go:130] > Device: 0,22	Inode: 847         Links: 1
	I0416 18:21:18.595121    6100 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0416 18:21:18.595121    6100 command_runner.go:130] > Access: 2024-04-16 18:21:18.663583254 +0000
	I0416 18:21:18.595121    6100 command_runner.go:130] > Modify: 2024-04-16 18:21:18.663583254 +0000
	I0416 18:21:18.595121    6100 command_runner.go:130] > Change: 2024-04-16 18:21:18.666583320 +0000
	I0416 18:21:18.595121    6100 command_runner.go:130] >  Birth: -
	I0416 18:21:18.595121    6100 start.go:562] Will wait 60s for crictl version
	I0416 18:21:18.603830    6100 ssh_runner.go:195] Run: which crictl
	I0416 18:21:18.609112    6100 command_runner.go:130] > /usr/bin/crictl
	I0416 18:21:18.617790    6100 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 18:21:18.663486    6100 command_runner.go:130] > Version:  0.1.0
	I0416 18:21:18.663486    6100 command_runner.go:130] > RuntimeName:  docker
	I0416 18:21:18.663900    6100 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0416 18:21:18.663900    6100 command_runner.go:130] > RuntimeApiVersion:  v1
	I0416 18:21:18.667396    6100 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 18:21:18.676387    6100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 18:21:18.703477    6100 command_runner.go:130] > 26.0.1
	I0416 18:21:18.713969    6100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 18:21:18.740947    6100 command_runner.go:130] > 26.0.1
	I0416 18:21:18.742951    6100 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 18:21:18.742951    6100 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 18:21:18.748950    6100 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 18:21:18.748950    6100 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 18:21:18.748950    6100 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 18:21:18.748950    6100 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 18:21:18.751948    6100 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 18:21:18.751948    6100 ip.go:210] interface addr: 172.19.80.1/20
	I0416 18:21:18.759949    6100 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 18:21:18.764976    6100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 18:21:18.788835    6100 kubeadm.go:877] updating cluster {Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.83.104 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.85.139 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 18:21:18.789098    6100 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 18:21:18.795942    6100 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 18:21:18.818294    6100 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0416 18:21:18.818364    6100 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0416 18:21:18.818364    6100 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0416 18:21:18.818364    6100 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0416 18:21:18.818364    6100 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0416 18:21:18.818364    6100 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0416 18:21:18.818364    6100 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0416 18:21:18.818364    6100 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0416 18:21:18.818364    6100 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 18:21:18.818364    6100 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0416 18:21:18.818364    6100 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0416 18:21:18.818364    6100 docker.go:615] Images already preloaded, skipping extraction
	I0416 18:21:18.826242    6100 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 18:21:18.847002    6100 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0416 18:21:18.847002    6100 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0416 18:21:18.847002    6100 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0416 18:21:18.847002    6100 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0416 18:21:18.847002    6100 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0416 18:21:18.847002    6100 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0416 18:21:18.847114    6100 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0416 18:21:18.847114    6100 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0416 18:21:18.847114    6100 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 18:21:18.847114    6100 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0416 18:21:18.847362    6100 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0416 18:21:18.847362    6100 cache_images.go:84] Images are preloaded, skipping loading
	I0416 18:21:18.847362    6100 kubeadm.go:928] updating node { 172.19.83.104 8443 v1.29.3 docker true true} ...
	I0416 18:21:18.847362    6100 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-945500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.83.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 18:21:18.854031    6100 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 18:21:18.880193    6100 command_runner.go:130] > cgroupfs
	I0416 18:21:18.881510    6100 cni.go:84] Creating CNI manager for ""
	I0416 18:21:18.881577    6100 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0416 18:21:18.881648    6100 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 18:21:18.881714    6100 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.83.104 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-945500 NodeName:multinode-945500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.83.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.83.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 18:21:18.881972    6100 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.83.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-945500"
	  kubeletExtraArgs:
	    node-ip: 172.19.83.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.83.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 18:21:18.892131    6100 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 18:21:18.910498    6100 command_runner.go:130] > kubeadm
	I0416 18:21:18.910498    6100 command_runner.go:130] > kubectl
	I0416 18:21:18.910498    6100 command_runner.go:130] > kubelet
	I0416 18:21:18.910498    6100 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 18:21:18.921883    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 18:21:18.938104    6100 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0416 18:21:18.971657    6100 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 18:21:18.998326    6100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0416 18:21:19.040348    6100 ssh_runner.go:195] Run: grep 172.19.83.104	control-plane.minikube.internal$ /etc/hosts
	I0416 18:21:19.046898    6100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.83.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 18:21:19.074680    6100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:21:19.246173    6100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 18:21:19.272609    6100 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500 for IP: 172.19.83.104
	I0416 18:21:19.272723    6100 certs.go:194] generating shared ca certs ...
	I0416 18:21:19.272801    6100 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:21:19.273027    6100 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 18:21:19.273630    6100 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 18:21:19.273630    6100 certs.go:256] generating profile certs ...
	I0416 18:21:19.275057    6100 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.key
	I0416 18:21:19.275287    6100 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.ee50f9d4
	I0416 18:21:19.275512    6100 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.ee50f9d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.83.104]
	I0416 18:21:19.618188    6100 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.ee50f9d4 ...
	I0416 18:21:19.618188    6100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.ee50f9d4: {Name:mk1f72169f6e81bcfcbe83fa03b26f15975d58c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:21:19.619201    6100 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.ee50f9d4 ...
	I0416 18:21:19.620217    6100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.ee50f9d4: {Name:mk7bbb58856f4723240bed121ab9ecb0a828f1b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:21:19.621324    6100 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.ee50f9d4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt
	I0416 18:21:19.631251    6100 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.ee50f9d4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key
	I0416 18:21:19.632245    6100 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key
	I0416 18:21:19.632245    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 18:21:19.633238    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 18:21:19.633238    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 18:21:19.633238    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 18:21:19.633238    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 18:21:19.633238    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 18:21:19.633238    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 18:21:19.633238    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 18:21:19.633238    6100 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 18:21:19.634823    6100 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 18:21:19.634823    6100 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 18:21:19.634823    6100 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 18:21:19.635441    6100 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 18:21:19.635601    6100 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 18:21:19.635601    6100 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 18:21:19.636190    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 18:21:19.636190    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 18:21:19.636190    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:21:19.637352    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 18:21:19.679344    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 18:21:19.723365    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 18:21:19.767143    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 18:21:19.809881    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 18:21:19.850881    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 18:21:19.893280    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 18:21:19.936699    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 18:21:19.980979    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 18:21:20.022061    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 18:21:20.061219    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 18:21:20.098345    6100 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 18:21:20.137354    6100 ssh_runner.go:195] Run: openssl version
	I0416 18:21:20.145510    6100 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0416 18:21:20.157784    6100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 18:21:20.182016    6100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 18:21:20.189100    6100 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 18:21:20.189100    6100 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 18:21:20.198972    6100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 18:21:20.206503    6100 command_runner.go:130] > 51391683
	I0416 18:21:20.214143    6100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
	I0416 18:21:20.240946    6100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 18:21:20.279061    6100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 18:21:20.286584    6100 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 18:21:20.286584    6100 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 18:21:20.294975    6100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 18:21:20.303724    6100 command_runner.go:130] > 3ec20f2e
	I0416 18:21:20.313469    6100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 18:21:20.341996    6100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 18:21:20.367682    6100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:21:20.374174    6100 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:21:20.374174    6100 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:21:20.385312    6100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:21:20.393735    6100 command_runner.go:130] > b5213941
	I0416 18:21:20.401441    6100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
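	The three blocks above install CA certificates the way OpenSSL expects to find them: hash each PEM with `openssl x509 -hash`, then symlink it as `<hash>.0` under /etc/ssl/certs so `-CApath` lookups resolve. A minimal sketch of the same technique, using a throwaway self-signed cert and hypothetical /tmp paths rather than minikube's real files:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Generate a throwaway self-signed CA cert (hypothetical names, illustration only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem 2>/dev/null

# Compute the 8-hex-digit subject hash OpenSSL uses to locate trusted certs.
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)

# Link the cert under its hash so `openssl verify -CApath` can find it,
# mirroring the `ln -fs ... /etc/ssl/certs/<hash>.0` commands in the log.
certdir=/tmp/demo-certs
mkdir -p "$certdir"
ln -fs /tmp/demo-ca.pem "$certdir/$hash.0"

echo "$hash"
```

	The `test -L ... || ln -fs ...` form in the log makes the step idempotent: an existing correct symlink is left alone on restart.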
	I0416 18:21:20.430085    6100 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 18:21:20.436874    6100 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 18:21:20.437565    6100 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0416 18:21:20.437565    6100 command_runner.go:130] > Device: 8,1	Inode: 9431342     Links: 1
	I0416 18:21:20.437565    6100 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0416 18:21:20.437627    6100 command_runner.go:130] > Access: 2024-04-16 17:57:16.870126444 +0000
	I0416 18:21:20.437674    6100 command_runner.go:130] > Modify: 2024-04-16 17:57:16.870126444 +0000
	I0416 18:21:20.437725    6100 command_runner.go:130] > Change: 2024-04-16 17:57:16.870126444 +0000
	I0416 18:21:20.437725    6100 command_runner.go:130] >  Birth: 2024-04-16 17:57:16.870126444 +0000
	I0416 18:21:20.446281    6100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 18:21:20.454044    6100 command_runner.go:130] > Certificate will not expire
	I0416 18:21:20.464317    6100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 18:21:20.473358    6100 command_runner.go:130] > Certificate will not expire
	I0416 18:21:20.481843    6100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 18:21:20.491216    6100 command_runner.go:130] > Certificate will not expire
	I0416 18:21:20.499222    6100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 18:21:20.507808    6100 command_runner.go:130] > Certificate will not expire
	I0416 18:21:20.516170    6100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 18:21:20.525030    6100 command_runner.go:130] > Certificate will not expire
	I0416 18:21:20.534005    6100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 18:21:20.545195    6100 command_runner.go:130] > Certificate will not expire
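	Each of the expiry probes above uses `openssl x509 -checkend 86400`, which asks whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 (and the message "Certificate will not expire") means it stays valid, nonzero means it would expire and needs regeneration. A sketch under the same assumption, with a hypothetical throwaway cert in /tmp:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Throwaway cert valid for 2 days (hypothetical path, illustration only).
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
  -subj "/CN=demo" -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# Exit 0 => still valid 24h from now; exit 1 => would have expired by then.
if openssl x509 -noout -in /tmp/demo.crt -checkend 86400; then
  : # openssl already printed "Certificate will not expire"
else
  echo "Certificate needs regeneration"
fi
```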
	I0416 18:21:20.545195    6100 kubeadm.go:391] StartCluster: {Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.83.104 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.85.139 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 18:21:20.552385    6100 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 18:21:20.584656    6100 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 18:21:20.601391    6100 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0416 18:21:20.602241    6100 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0416 18:21:20.602241    6100 command_runner.go:130] > /var/lib/minikube/etcd:
	I0416 18:21:20.602241    6100 command_runner.go:130] > member
	W0416 18:21:20.602241    6100 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 18:21:20.602406    6100 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 18:21:20.602406    6100 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 18:21:20.612829    6100 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 18:21:20.630324    6100 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 18:21:20.631336    6100 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-945500" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 18:21:20.632502    6100 kubeconfig.go:62] C:\Users\jenkins.minikube5\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-945500" cluster setting kubeconfig missing "multinode-945500" context setting]
	I0416 18:21:20.633185    6100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:21:20.648952    6100 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 18:21:20.649835    6100 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.83.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500/client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 18:21:20.651583    6100 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 18:21:20.663021    6100 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 18:21:20.681082    6100 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0416 18:21:20.681725    6100 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0416 18:21:20.681725    6100 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0416 18:21:20.681725    6100 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0416 18:21:20.681725    6100 command_runner.go:130] >  kind: InitConfiguration
	I0416 18:21:20.681725    6100 command_runner.go:130] >  localAPIEndpoint:
	I0416 18:21:20.681725    6100 command_runner.go:130] > -  advertiseAddress: 172.19.91.227
	I0416 18:21:20.681725    6100 command_runner.go:130] > +  advertiseAddress: 172.19.83.104
	I0416 18:21:20.681725    6100 command_runner.go:130] >    bindPort: 8443
	I0416 18:21:20.681725    6100 command_runner.go:130] >  bootstrapTokens:
	I0416 18:21:20.681725    6100 command_runner.go:130] >    - groups:
	I0416 18:21:20.681725    6100 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0416 18:21:20.681725    6100 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0416 18:21:20.681725    6100 command_runner.go:130] >    name: "multinode-945500"
	I0416 18:21:20.681725    6100 command_runner.go:130] >    kubeletExtraArgs:
	I0416 18:21:20.681725    6100 command_runner.go:130] > -    node-ip: 172.19.91.227
	I0416 18:21:20.681725    6100 command_runner.go:130] > +    node-ip: 172.19.83.104
	I0416 18:21:20.681725    6100 command_runner.go:130] >    taints: []
	I0416 18:21:20.681725    6100 command_runner.go:130] >  ---
	I0416 18:21:20.681725    6100 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0416 18:21:20.681725    6100 command_runner.go:130] >  kind: ClusterConfiguration
	I0416 18:21:20.681725    6100 command_runner.go:130] >  apiServer:
	I0416 18:21:20.681725    6100 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.19.91.227"]
	I0416 18:21:20.681725    6100 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.19.83.104"]
	I0416 18:21:20.681725    6100 command_runner.go:130] >    extraArgs:
	I0416 18:21:20.681725    6100 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0416 18:21:20.681725    6100 command_runner.go:130] >  controllerManager:
	I0416 18:21:20.681725    6100 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.19.91.227
	+  advertiseAddress: 172.19.83.104
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-945500"
	   kubeletExtraArgs:
	-    node-ip: 172.19.91.227
	+    node-ip: 172.19.83.104
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.19.91.227"]
	+  certSANs: ["127.0.0.1", "localhost", "172.19.83.104"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
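	The drift detection above relies on `diff -u` exit codes: 0 means the rendered kubeadm.yaml.new matches the on-disk kubeadm.yaml, 1 means the files differ (here, the node IP changed from 172.19.91.227 to 172.19.83.104), so the cluster is reconfigured from the new file. A sketch of the same pattern with hypothetical /tmp stand-ins for /var/tmp/minikube/kubeadm.yaml{,.new}:

```shell
#!/usr/bin/env bash
# diff exits 0 when the files are identical, 1 when they differ.
printf 'advertiseAddress: 172.19.91.227\n' > /tmp/kubeadm.yaml
printf 'advertiseAddress: 172.19.83.104\n' > /tmp/kubeadm.yaml.new

if ! diff -u /tmp/kubeadm.yaml /tmp/kubeadm.yaml.new > /tmp/kubeadm.diff; then
  echo "config drift detected, reconfiguring"
  # Promote the freshly rendered config, as the log does a few lines below
  # with `sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml`.
  cp /tmp/kubeadm.yaml.new /tmp/kubeadm.yaml
fi
```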
	I0416 18:21:20.681725    6100 kubeadm.go:1154] stopping kube-system containers ...
	I0416 18:21:20.688408    6100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 18:21:20.711124    6100 command_runner.go:130] > 6ad0b1d75a1e
	I0416 18:21:20.712006    6100 command_runner.go:130] > 2b470472d009
	I0416 18:21:20.712006    6100 command_runner.go:130] > 6f233a9704ee
	I0416 18:21:20.712006    6100 command_runner.go:130] > 2ba60ece6840
	I0416 18:21:20.712006    6100 command_runner.go:130] > cd37920f1d54
	I0416 18:21:20.712006    6100 command_runner.go:130] > f56880607ce1
	I0416 18:21:20.712133    6100 command_runner.go:130] > d2cd68d7f406
	I0416 18:21:20.712133    6100 command_runner.go:130] > 68766d2b671f
	I0416 18:21:20.712133    6100 command_runner.go:130] > 736259e5d03b
	I0416 18:21:20.712133    6100 command_runner.go:130] > 4a7c8d9808b6
	I0416 18:21:20.712133    6100 command_runner.go:130] > 91288754cb0b
	I0416 18:21:20.712133    6100 command_runner.go:130] > 0cae708a3787
	I0416 18:21:20.712133    6100 command_runner.go:130] > 5f7e5b16341d
	I0416 18:21:20.712133    6100 command_runner.go:130] > ecb0ceb1a3fe
	I0416 18:21:20.712133    6100 command_runner.go:130] > b8699d93388d
	I0416 18:21:20.712243    6100 command_runner.go:130] > d28c611e0605
	I0416 18:21:20.712243    6100 docker.go:483] Stopping containers: [6ad0b1d75a1e 2b470472d009 6f233a9704ee 2ba60ece6840 cd37920f1d54 f56880607ce1 d2cd68d7f406 68766d2b671f 736259e5d03b 4a7c8d9808b6 91288754cb0b 0cae708a3787 5f7e5b16341d ecb0ceb1a3fe b8699d93388d d28c611e0605]
	I0416 18:21:20.719536    6100 ssh_runner.go:195] Run: docker stop 6ad0b1d75a1e 2b470472d009 6f233a9704ee 2ba60ece6840 cd37920f1d54 f56880607ce1 d2cd68d7f406 68766d2b671f 736259e5d03b 4a7c8d9808b6 91288754cb0b 0cae708a3787 5f7e5b16341d ecb0ceb1a3fe b8699d93388d d28c611e0605
	I0416 18:21:20.745732    6100 command_runner.go:130] > 6ad0b1d75a1e
	I0416 18:21:20.745732    6100 command_runner.go:130] > 2b470472d009
	I0416 18:21:20.745732    6100 command_runner.go:130] > 6f233a9704ee
	I0416 18:21:20.745732    6100 command_runner.go:130] > 2ba60ece6840
	I0416 18:21:20.745732    6100 command_runner.go:130] > cd37920f1d54
	I0416 18:21:20.745732    6100 command_runner.go:130] > f56880607ce1
	I0416 18:21:20.745732    6100 command_runner.go:130] > d2cd68d7f406
	I0416 18:21:20.745732    6100 command_runner.go:130] > 68766d2b671f
	I0416 18:21:20.745732    6100 command_runner.go:130] > 736259e5d03b
	I0416 18:21:20.745732    6100 command_runner.go:130] > 4a7c8d9808b6
	I0416 18:21:20.745732    6100 command_runner.go:130] > 91288754cb0b
	I0416 18:21:20.745732    6100 command_runner.go:130] > 0cae708a3787
	I0416 18:21:20.745732    6100 command_runner.go:130] > 5f7e5b16341d
	I0416 18:21:20.745732    6100 command_runner.go:130] > ecb0ceb1a3fe
	I0416 18:21:20.745732    6100 command_runner.go:130] > b8699d93388d
	I0416 18:21:20.745732    6100 command_runner.go:130] > d28c611e0605
	I0416 18:21:20.757003    6100 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 18:21:20.790178    6100 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 18:21:20.806208    6100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0416 18:21:20.806208    6100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0416 18:21:20.806208    6100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0416 18:21:20.806665    6100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 18:21:20.806791    6100 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 18:21:20.806791    6100 kubeadm.go:156] found existing configuration files:
	
	I0416 18:21:20.817877    6100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 18:21:20.833161    6100 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 18:21:20.833837    6100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 18:21:20.841401    6100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 18:21:20.869610    6100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 18:21:20.884918    6100 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 18:21:20.885095    6100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 18:21:20.893856    6100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 18:21:20.922902    6100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 18:21:20.937552    6100 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 18:21:20.937913    6100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 18:21:20.946598    6100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 18:21:20.972072    6100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 18:21:20.988305    6100 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 18:21:20.988368    6100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 18:21:20.996217    6100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
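	The four nearly identical stanzas above apply one rule per kubeconfig: grep each /etc/kubernetes/*.conf for the expected control-plane endpoint, and if the file is missing or points elsewhere (grep exits nonzero), `rm -f` it so the `kubeadm init phase kubeconfig` step below regenerates it. A condensed sketch of that rule, using a hypothetical /tmp directory in place of /etc/kubernetes:

```shell
#!/usr/bin/env bash
# Delete config files that do not reference the expected endpoint
# (hypothetical paths; the log applies this to admin.conf, kubelet.conf,
# controller-manager.conf, and scheduler.conf in turn).
endpoint="https://control-plane.minikube.internal:8443"
mkdir -p /tmp/etc-kubernetes
printf 'server: https://other-host:8443\n' > /tmp/etc-kubernetes/admin.conf

for f in /tmp/etc-kubernetes/admin.conf; do
  if ! grep -q "$endpoint" "$f" 2>/dev/null; then
    rm -f "$f"   # stale or absent: remove so kubeadm writes a fresh one
  fi
done
```

	Running the removal unconditionally on grep failure is safe because `rm -f` succeeds whether or not the file exists, matching the log's behavior when all four files are already absent.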
	I0416 18:21:21.022601    6100 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 18:21:21.040025    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 18:21:21.270121    6100 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 18:21:21.270121    6100 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0416 18:21:21.270121    6100 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0416 18:21:21.270121    6100 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 18:21:21.270121    6100 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0416 18:21:21.270121    6100 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0416 18:21:21.270121    6100 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0416 18:21:21.270281    6100 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0416 18:21:21.270281    6100 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0416 18:21:21.270281    6100 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 18:21:21.270281    6100 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 18:21:21.270343    6100 command_runner.go:130] > [certs] Using the existing "sa" key
	I0416 18:21:21.270392    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 18:21:22.493311    6100 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 18:21:22.493311    6100 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 18:21:22.493311    6100 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 18:21:22.493311    6100 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 18:21:22.493311    6100 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 18:21:22.493311    6100 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 18:21:22.493311    6100 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2228504s)
	I0416 18:21:22.493311    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 18:21:22.769800    6100 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 18:21:22.770187    6100 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 18:21:22.770187    6100 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0416 18:21:22.770247    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 18:21:22.865592    6100 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 18:21:22.865730    6100 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 18:21:22.865730    6100 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 18:21:22.865730    6100 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 18:21:22.865807    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 18:21:22.967260    6100 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 18:21:22.967260    6100 api_server.go:52] waiting for apiserver process to appear ...
	I0416 18:21:22.980139    6100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 18:21:23.496671    6100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 18:21:23.983300    6100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 18:21:24.482657    6100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 18:21:24.513872    6100 command_runner.go:130] > 1832
	I0416 18:21:24.513872    6100 api_server.go:72] duration metric: took 1.5465242s to wait for apiserver process to appear ...
	I0416 18:21:24.513872    6100 api_server.go:88] waiting for apiserver healthz status ...
	I0416 18:21:24.513872    6100 api_server.go:253] Checking apiserver healthz at https://172.19.83.104:8443/healthz ...
	I0416 18:21:27.605327    6100 api_server.go:279] https://172.19.83.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 18:21:27.605562    6100 api_server.go:103] status: https://172.19.83.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 18:21:27.605645    6100 api_server.go:253] Checking apiserver healthz at https://172.19.83.104:8443/healthz ...
	I0416 18:21:27.689316    6100 api_server.go:279] https://172.19.83.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 18:21:27.689945    6100 api_server.go:103] status: https://172.19.83.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 18:21:28.027157    6100 api_server.go:253] Checking apiserver healthz at https://172.19.83.104:8443/healthz ...
	I0416 18:21:28.035853    6100 api_server.go:279] https://172.19.83.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 18:21:28.035853    6100 api_server.go:103] status: https://172.19.83.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 18:21:28.525991    6100 api_server.go:253] Checking apiserver healthz at https://172.19.83.104:8443/healthz ...
	I0416 18:21:28.535157    6100 api_server.go:279] https://172.19.83.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 18:21:28.535447    6100 api_server.go:103] status: https://172.19.83.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 18:21:29.028437    6100 api_server.go:253] Checking apiserver healthz at https://172.19.83.104:8443/healthz ...
	I0416 18:21:29.041488    6100 api_server.go:279] https://172.19.83.104:8443/healthz returned 200:
	ok
	I0416 18:21:29.042204    6100 round_trippers.go:463] GET https://172.19.83.104:8443/version
	I0416 18:21:29.042204    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:29.042204    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:29.042204    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:29.051782    6100 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0416 18:21:29.051782    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:29.051782    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:29.051782    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:29.051782    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:29.051782    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:29.051782    6100 round_trippers.go:580]     Content-Length: 263
	I0416 18:21:29.051782    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:29 GMT
	I0416 18:21:29.051782    6100 round_trippers.go:580]     Audit-Id: 309c1c07-9def-49d0-a541-d12180c9534f
	I0416 18:21:29.051782    6100 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0416 18:21:29.051782    6100 api_server.go:141] control plane version: v1.29.3
	I0416 18:21:29.051782    6100 api_server.go:131] duration metric: took 4.5376523s to wait for apiserver health ...
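The verbose `/healthz` bodies dumped above follow a fixed format: `[+]name ok` for passing checks and `[-]name failed: reason withheld` for failing ones. A small sketch of how such a body can be split into passing and failing check names (a hypothetical parser for illustration, not part of minikube):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseHealthz splits a verbose /healthz response body into the names of
// passing ([+]) and failing ([-]) checks.
func parseHealthz(body string) (ok, failed []string) {
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if len(line) <= 3 {
			continue
		}
		fields := strings.Fields(line[3:])
		if len(fields) == 0 {
			continue
		}
		switch {
		case strings.HasPrefix(line, "[+]"):
			ok = append(ok, fields[0])
		case strings.HasPrefix(line, "[-]"):
			failed = append(failed, fields[0])
		}
	}
	return ok, failed
}

func main() {
	body := "[+]ping ok\n[+]etcd ok\n" +
		"[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n" +
		"healthz check failed"
	ok, failed := parseHealthz(body)
	fmt.Println(len(ok), len(failed)) // 2 passing, 1 failing
}
```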
	I0416 18:21:29.051782    6100 cni.go:84] Creating CNI manager for ""
	I0416 18:21:29.051782    6100 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0416 18:21:29.052809    6100 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 18:21:29.061782    6100 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 18:21:29.069810    6100 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0416 18:21:29.070339    6100 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0416 18:21:29.070339    6100 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0416 18:21:29.070339    6100 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0416 18:21:29.070433    6100 command_runner.go:130] > Access: 2024-04-16 18:20:04.000071600 +0000
	I0416 18:21:29.070433    6100 command_runner.go:130] > Modify: 2024-04-16 08:43:32.000000000 +0000
	I0416 18:21:29.070433    6100 command_runner.go:130] > Change: 2024-04-16 18:19:54.261000000 +0000
	I0416 18:21:29.070433    6100 command_runner.go:130] >  Birth: -
	I0416 18:21:29.070433    6100 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 18:21:29.070433    6100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 18:21:29.115294    6100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 18:21:29.947912    6100 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0416 18:21:29.947912    6100 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0416 18:21:29.947972    6100 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0416 18:21:29.947972    6100 command_runner.go:130] > daemonset.apps/kindnet configured
	I0416 18:21:29.948026    6100 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 18:21:29.948162    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods
	I0416 18:21:29.948233    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:29.948233    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:29.948233    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:29.953281    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:29.953353    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:29.953353    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:29.953353    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:29.953353    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:29.953353    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:29.953353    6100 round_trippers.go:580]     Audit-Id: b5739e7d-5af9-4993-82e3-9fd5366cc000
	I0416 18:21:29.953353    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:29.954799    6100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1408"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 73060 chars]
	I0416 18:21:29.959976    6100 system_pods.go:59] 10 kube-system pods found
	I0416 18:21:29.960036    6100 system_pods.go:61] "coredns-76f75df574-86z7h" [1ca004a0-0575-4576-a5ed-ba0891a7d277] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 18:21:29.960036    6100 system_pods.go:61] "etcd-multinode-945500" [7c7a0e73-a281-4231-95c7-479afeb4945c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 18:21:29.960097    6100 system_pods.go:61] "kindnet-7pg6g" [b4887fd4-c2ff-40a2-ab8f-89e227151faa] Running
	I0416 18:21:29.960097    6100 system_pods.go:61] "kindnet-tp7jl" [91595b62-10ee-47cb-a0c9-2ca83ad70af7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0416 18:21:29.960097    6100 system_pods.go:61] "kube-apiserver-multinode-945500" [249203ba-a5d5-4e35-af8e-172d64c91440] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 18:21:29.960097    6100 system_pods.go:61] "kube-controller-manager-multinode-945500" [01b937c2-9827-4240-83f0-3536fec5eb5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 18:21:29.960097    6100 system_pods.go:61] "kube-proxy-q5bdr" [18f90e3f-dd52-44a3-918a-66181a779f58] Running
	I0416 18:21:29.960097    6100 system_pods.go:61] "kube-proxy-rfxsg" [b740e6e0-4768-4dd4-a958-307662a92578] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0416 18:21:29.960097    6100 system_pods.go:61] "kube-scheduler-multinode-945500" [a09e52e8-1ac2-4c22-8a3d-57969fae85a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 18:21:29.960097    6100 system_pods.go:61] "storage-provisioner" [3bd5cc95-eef6-473e-b6f9-898568046f1b] Running
	I0416 18:21:29.960186    6100 system_pods.go:74] duration metric: took 12.0698ms to wait for pod list to return data ...
	I0416 18:21:29.960186    6100 node_conditions.go:102] verifying NodePressure condition ...
	I0416 18:21:29.960235    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes
	I0416 18:21:29.960321    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:29.960321    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:29.960321    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:29.966003    6100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 18:21:29.966003    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:29.966003    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:29.967019    6100 round_trippers.go:580]     Audit-Id: 34b4f8b3-d2b1-43d6-92c3-479b07bd154b
	I0416 18:21:29.967019    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:29.967019    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:29.967019    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:29.967076    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:29.967181    6100 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1408"},"items":[{"metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 10249 chars]
	I0416 18:21:29.968131    6100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:21:29.968131    6100 node_conditions.go:123] node cpu capacity is 2
	I0416 18:21:29.968131    6100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:21:29.968131    6100 node_conditions.go:123] node cpu capacity is 2
	I0416 18:21:29.968131    6100 node_conditions.go:105] duration metric: took 7.9449ms to run NodePressure ...
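The NodePressure verification above inspects each node's condition list and requires that no pressure condition is `True`. Under the assumption that the NodeList JSON has been decoded into simple structs, the predicate looks roughly like this (an illustrative sketch, not minikube's code):

```go
package main

import "fmt"

// nodeCondition mirrors the two fields of a v1.NodeCondition that matter here.
type nodeCondition struct {
	Type   string
	Status string
}

// underPressure reports whether any pressure condition is currently True --
// the property the NodePressure check verifies is false on every node.
func underPressure(conds []nodeCondition) bool {
	for _, c := range conds {
		switch c.Type {
		case "MemoryPressure", "DiskPressure", "PIDPressure":
			if c.Status == "True" {
				return true
			}
		}
	}
	return false
}

func main() {
	healthy := []nodeCondition{
		{"MemoryPressure", "False"},
		{"DiskPressure", "False"},
		{"Ready", "True"},
	}
	fmt.Println(underPressure(healthy)) // false
}
```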
	I0416 18:21:29.968131    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 18:21:30.275784    6100 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0416 18:21:30.275997    6100 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0416 18:21:30.276065    6100 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 18:21:30.276400    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0416 18:21:30.276400    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.276400    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.276400    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.281498    6100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 18:21:30.281498    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.281498    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.281498    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.281498    6100 round_trippers.go:580]     Audit-Id: b84d62b7-6c4b-49a5-84aa-1b7b861f0277
	I0416 18:21:30.281498    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.281498    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.281498    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.282506    6100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1412"},"items":[{"metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"7c7a0e73-a281-4231-95c7-479afeb4945c","resourceVersion":"1397","creationTimestamp":"2024-04-16T18:21:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.83.104:2379","kubernetes.io/config.hash":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.mirror":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.seen":"2024-04-16T18:21:23.147214167Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 30532 chars]
	I0416 18:21:30.283505    6100 kubeadm.go:733] kubelet initialised
	I0416 18:21:30.283505    6100 kubeadm.go:734] duration metric: took 7.4391ms waiting for restarted kubelet to initialise ...
	I0416 18:21:30.283505    6100 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
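The `pod_ready` wait above applies one predicate to each system-critical pod: its `Ready` condition must be `True`. The `Ready:ContainersNotReady (...)` annotations in the pod list earlier are exactly the failing case of this check. A compact sketch of the predicate, assuming decoded condition structs (illustrative only):

```go
package main

import "fmt"

// podCondition mirrors the fields of a v1.PodCondition used by the check.
type podCondition struct {
	Type   string
	Status string
	Reason string
}

// isReady reports whether the pod's Ready condition is True, returning the
// reason (e.g. "ContainersNotReady") when it is not.
func isReady(conds []podCondition) (bool, string) {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True", c.Reason
		}
	}
	return false, "ReadyConditionMissing"
}

func main() {
	restarting := []podCondition{
		{Type: "PodScheduled", Status: "True"},
		{Type: "Ready", Status: "False", Reason: "ContainersNotReady"},
	}
	ready, reason := isReady(restarting)
	fmt.Println(ready, reason) // false ContainersNotReady
}
```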
	I0416 18:21:30.283505    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods
	I0416 18:21:30.283505    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.283505    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.283505    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.287514    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:30.287514    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.287514    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.287514    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.287514    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.287514    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.287514    6100 round_trippers.go:580]     Audit-Id: 58bb7d51-526d-438e-a8db-45efc3438395
	I0416 18:21:30.287514    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.288510    6100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1412"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 72676 chars]
	I0416 18:21:30.291499    6100 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:30.291499    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:30.291499    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.291499    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.291499    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.294512    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:30.294512    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.294512    6100 round_trippers.go:580]     Audit-Id: 3ea8cc92-4cb4-4311-acd1-8e9fbef70dd4
	I0416 18:21:30.295281    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.295370    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.295370    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.295370    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.295422    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.295602    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0416 18:21:30.296420    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:30.296420    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.296420    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.296420    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.299460    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:30.299460    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.299460    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.299460    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.299460    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.299460    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.299460    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.299460    6100 round_trippers.go:580]     Audit-Id: 87a2f897-d1fa-4256-91fd-5a9c081676ee
	I0416 18:21:30.299460    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:30.300108    6100 pod_ready.go:97] node "multinode-945500" hosting pod "coredns-76f75df574-86z7h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:30.300108    6100 pod_ready.go:81] duration metric: took 8.6092ms for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	E0416 18:21:30.300193    6100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-945500" hosting pod "coredns-76f75df574-86z7h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:30.300193    6100 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:30.300312    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:21:30.300312    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.300312    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.300312    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.303098    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:30.303098    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.303098    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.303098    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.303098    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.303098    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.303098    6100 round_trippers.go:580]     Audit-Id: 051209a3-c3e2-4a59-af16-9942e174a927
	I0416 18:21:30.303098    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.303098    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"7c7a0e73-a281-4231-95c7-479afeb4945c","resourceVersion":"1397","creationTimestamp":"2024-04-16T18:21:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.83.104:2379","kubernetes.io/config.hash":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.mirror":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.seen":"2024-04-16T18:21:23.147214167Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0416 18:21:30.303780    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:30.303814    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.303814    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.303814    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.307071    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:30.307071    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.307071    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.307133    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.307133    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.307133    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.307133    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.307133    6100 round_trippers.go:580]     Audit-Id: 4a8e0bcf-a76d-466a-bbf6-903f4b7d36db
	I0416 18:21:30.307133    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:30.307133    6100 pod_ready.go:97] node "multinode-945500" hosting pod "etcd-multinode-945500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:30.307667    6100 pod_ready.go:81] duration metric: took 7.4739ms for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	E0416 18:21:30.307667    6100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-945500" hosting pod "etcd-multinode-945500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:30.307667    6100 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:30.307667    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-945500
	I0416 18:21:30.307783    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.307783    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.307783    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.310951    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:30.311251    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.311251    6100 round_trippers.go:580]     Audit-Id: b9d06344-e19d-4859-b5a5-ee75d232210d
	I0416 18:21:30.311251    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.311251    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.311251    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.311251    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.311251    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.311458    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-945500","namespace":"kube-system","uid":"249203ba-a5d5-4e35-af8e-172d64c91440","resourceVersion":"1408","creationTimestamp":"2024-04-16T18:21:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.83.104:8443","kubernetes.io/config.hash":"2693abda4b2acecd43625f54801b2092","kubernetes.io/config.mirror":"2693abda4b2acecd43625f54801b2092","kubernetes.io/config.seen":"2024-04-16T18:21:23.093778187Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7939 chars]
	I0416 18:21:30.311518    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:30.311518    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.311518    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.311518    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.314094    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:30.314094    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.314094    6100 round_trippers.go:580]     Audit-Id: ce6d498e-7b38-4548-9298-f20f3a1424de
	I0416 18:21:30.314094    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.314094    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.314094    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.314094    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.314094    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.314729    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:30.315150    6100 pod_ready.go:97] node "multinode-945500" hosting pod "kube-apiserver-multinode-945500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:30.315150    6100 pod_ready.go:81] duration metric: took 7.4826ms for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	E0416 18:21:30.315207    6100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-945500" hosting pod "kube-apiserver-multinode-945500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:30.315207    6100 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:30.315292    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-945500
	I0416 18:21:30.315292    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.315292    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.315292    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.317072    6100 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0416 18:21:30.317845    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.317845    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.317845    6100 round_trippers.go:580]     Audit-Id: ed90919a-665f-40e1-8702-99be45c6731a
	I0416 18:21:30.317845    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.317845    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.317845    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.317845    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.318438    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-945500","namespace":"kube-system","uid":"01b937c2-9827-4240-83f0-3536fec5eb5e","resourceVersion":"1392","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.mirror":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.seen":"2024-04-16T17:57:28.101473146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7732 chars]
	I0416 18:21:30.357180    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:30.357275    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.357275    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.357275    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.360511    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:30.360511    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.360511    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.360511    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.360511    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.360511    6100 round_trippers.go:580]     Audit-Id: 9fdf9aad-1db4-40df-a245-6dbce6256e46
	I0416 18:21:30.360511    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.360511    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.360511    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:30.361326    6100 pod_ready.go:97] node "multinode-945500" hosting pod "kube-controller-manager-multinode-945500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:30.361326    6100 pod_ready.go:81] duration metric: took 46.1161ms for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	E0416 18:21:30.361326    6100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-945500" hosting pod "kube-controller-manager-multinode-945500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:30.361326    6100 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q5bdr" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:30.559470    6100 request.go:629] Waited for 197.8253ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q5bdr
	I0416 18:21:30.559470    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q5bdr
	I0416 18:21:30.559470    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.559470    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.559470    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.563135    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:30.563135    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.563135    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.563135    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.563135    6100 round_trippers.go:580]     Audit-Id: aaaffc0a-e82e-40be-a3c7-bd42cc959370
	I0416 18:21:30.564059    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.564059    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.564059    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.564443    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q5bdr","generateName":"kube-proxy-","namespace":"kube-system","uid":"18f90e3f-dd52-44a3-918a-66181a779f58","resourceVersion":"614","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5826 chars]
	I0416 18:21:30.762186    6100 request.go:629] Waited for 196.9037ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:21:30.762330    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:21:30.762466    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.762513    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.762513    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.766521    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:30.766521    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.766521    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.766521    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.766521    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.766521    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.766521    6100 round_trippers.go:580]     Audit-Id: 7d08be20-9010-4ab0-a635-c801f24f84ba
	I0416 18:21:30.766521    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.767218    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"1253","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-mana [truncated 3831 chars]
	I0416 18:21:30.767931    6100 pod_ready.go:92] pod "kube-proxy-q5bdr" in "kube-system" namespace has status "Ready":"True"
	I0416 18:21:30.767931    6100 pod_ready.go:81] duration metric: took 406.5819ms for pod "kube-proxy-q5bdr" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:30.767931    6100 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:30.949186    6100 request.go:629] Waited for 180.9542ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 18:21:30.949443    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 18:21:30.949443    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.949443    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.949443    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.953594    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:30.953594    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.953594    6100 round_trippers.go:580]     Audit-Id: 646374f8-6dd9-4368-9d91-3734bd9f2169
	I0416 18:21:30.953594    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.953594    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.953594    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.953594    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.953594    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:31 GMT
	I0416 18:21:30.954123    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rfxsg","generateName":"kube-proxy-","namespace":"kube-system","uid":"b740e6e0-4768-4dd4-a958-307662a92578","resourceVersion":"1410","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0416 18:21:31.151002    6100 request.go:629] Waited for 195.9871ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:31.151002    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:31.151002    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:31.151002    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:31.151002    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:31.156190    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:31.156287    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:31.156287    6100 round_trippers.go:580]     Audit-Id: 5060cfe5-4e82-4022-8cb6-c66802f44a56
	I0416 18:21:31.156287    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:31.156287    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:31.156287    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:31.156287    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:31.156402    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:31 GMT
	I0416 18:21:31.156715    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:31.157007    6100 pod_ready.go:97] node "multinode-945500" hosting pod "kube-proxy-rfxsg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:31.157007    6100 pod_ready.go:81] duration metric: took 389.0541ms for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	E0416 18:21:31.157539    6100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-945500" hosting pod "kube-proxy-rfxsg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:31.157539    6100 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:31.352483    6100 request.go:629] Waited for 194.7388ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 18:21:31.352658    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 18:21:31.352658    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:31.352658    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:31.352658    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:31.357523    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:31.357523    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:31.357523    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:31.357523    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:31.357523    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:31.357523    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:31 GMT
	I0416 18:21:31.357523    6100 round_trippers.go:580]     Audit-Id: 4e526a9b-21d6-4eec-9e13-ea9da79bd8c7
	I0416 18:21:31.357523    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:31.357523    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-945500","namespace":"kube-system","uid":"a09e52e8-1ac2-4c22-8a3d-57969fae85a9","resourceVersion":"1391","creationTimestamp":"2024-04-16T17:57:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.mirror":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.seen":"2024-04-16T17:57:20.694761708Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5444 chars]
	I0416 18:21:31.556678    6100 request.go:629] Waited for 197.6783ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:31.556678    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:31.556678    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:31.556678    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:31.556678    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:31.560424    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:31.560424    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:31.560424    6100 round_trippers.go:580]     Audit-Id: 11bc678b-39a9-447c-9dbf-7de32d71873f
	I0416 18:21:31.560424    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:31.561436    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:31.561436    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:31.561483    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:31.561483    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:31 GMT
	I0416 18:21:31.561841    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:31.562511    6100 pod_ready.go:97] node "multinode-945500" hosting pod "kube-scheduler-multinode-945500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:31.562619    6100 pod_ready.go:81] duration metric: took 405.057ms for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	E0416 18:21:31.562619    6100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-945500" hosting pod "kube-scheduler-multinode-945500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:31.562619    6100 pod_ready.go:38] duration metric: took 1.2790412s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:21:31.562726    6100 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 18:21:31.579389    6100 command_runner.go:130] > -16
	I0416 18:21:31.579389    6100 ops.go:34] apiserver oom_adj: -16
	I0416 18:21:31.579389    6100 kubeadm.go:591] duration metric: took 10.9763098s to restartPrimaryControlPlane
	I0416 18:21:31.579389    6100 kubeadm.go:393] duration metric: took 11.0335672s to StartCluster
	I0416 18:21:31.579389    6100 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:21:31.579389    6100 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 18:21:31.580775    6100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:21:31.582619    6100 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.83.104 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 18:21:31.582619    6100 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 18:21:31.583515    6100 out.go:177] * Enabled addons: 
	I0416 18:21:31.583347    6100 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:21:31.584058    6100 addons.go:505] duration metric: took 1.4389ms for enable addons: enabled=[]
	I0416 18:21:31.583515    6100 out.go:177] * Verifying Kubernetes components...
	I0416 18:21:31.592212    6100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:21:31.860734    6100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 18:21:31.887581    6100 node_ready.go:35] waiting up to 6m0s for node "multinode-945500" to be "Ready" ...
	I0416 18:21:31.887793    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:31.887864    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:31.887864    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:31.887864    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:31.891761    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:31.891761    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:31.891761    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:31.891761    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:31.891761    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:32 GMT
	I0416 18:21:31.891761    6100 round_trippers.go:580]     Audit-Id: 4b28e678-231c-4674-a2ce-17b51603bcc0
	I0416 18:21:31.891761    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:31.891761    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:31.892520    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:32.401801    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:32.402244    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:32.402244    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:32.402244    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:32.409822    6100 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 18:21:32.409822    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:32.409822    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:32.409822    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:32 GMT
	I0416 18:21:32.409822    6100 round_trippers.go:580]     Audit-Id: d8a94859-d8f5-4665-8fa6-87ee41266df6
	I0416 18:21:32.409822    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:32.409822    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:32.410477    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:32.410614    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:32.901850    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:32.901937    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:32.901971    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:32.901971    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:32.908426    6100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 18:21:32.908426    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:32.908426    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:32.908426    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:32.908426    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:32.908426    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:32.908426    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:33 GMT
	I0416 18:21:32.908426    6100 round_trippers.go:580]     Audit-Id: 9913f009-5bcf-466a-8735-95b4955ab714
	I0416 18:21:32.908426    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:33.391283    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:33.391283    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:33.391283    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:33.391283    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:33.396251    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:33.396251    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:33.396251    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:33 GMT
	I0416 18:21:33.396251    6100 round_trippers.go:580]     Audit-Id: 216d57c2-e99f-413c-955c-501019c11f8d
	I0416 18:21:33.396251    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:33.396251    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:33.396251    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:33.396251    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:33.396251    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:33.889817    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:33.889817    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:33.889817    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:33.889817    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:33.892580    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:33.892580    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:33.892580    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:33.892580    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:33.892580    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:33.892580    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:33.892580    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:34 GMT
	I0416 18:21:33.892580    6100 round_trippers.go:580]     Audit-Id: 6263b3d2-9ac9-46f1-9f25-a94ddbb6119c
	I0416 18:21:33.894021    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:33.894770    6100 node_ready.go:49] node "multinode-945500" has status "Ready":"True"
	I0416 18:21:33.894881    6100 node_ready.go:38] duration metric: took 2.0070897s for node "multinode-945500" to be "Ready" ...
	I0416 18:21:33.894881    6100 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:21:33.895060    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods
	I0416 18:21:33.895060    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:33.895060    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:33.895060    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:33.902848    6100 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 18:21:33.902848    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:33.902848    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:34 GMT
	I0416 18:21:33.902848    6100 round_trippers.go:580]     Audit-Id: dbb6d992-6f70-48f2-82a6-0e5d32bb5622
	I0416 18:21:33.902848    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:33.902848    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:33.902848    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:33.902848    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:33.904656    6100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1478"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 72676 chars]
	I0416 18:21:33.907694    6100 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:33.907840    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:33.907912    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:33.907912    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:33.907912    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:33.911320    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:33.911320    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:33.911320    6100 round_trippers.go:580]     Audit-Id: 3220a1e2-d635-457b-81b1-f8894b38559f
	I0416 18:21:33.911320    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:33.911320    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:33.911320    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:33.911320    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:33.911320    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:34 GMT
	I0416 18:21:33.911320    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0416 18:21:33.912355    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:33.912355    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:33.912355    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:33.912355    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:33.915031    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:33.915031    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:33.915031    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:33.915031    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:34 GMT
	I0416 18:21:33.915031    6100 round_trippers.go:580]     Audit-Id: 6b4144ae-725a-4980-9f4a-15a386954169
	I0416 18:21:33.915031    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:33.915031    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:33.915031    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:33.915450    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:34.417249    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:34.417374    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:34.417374    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:34.417374    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:34.421782    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:34.421782    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:34.421782    6100 round_trippers.go:580]     Audit-Id: d5d7fe68-eefc-43fe-a98f-83890e74a92a
	I0416 18:21:34.421782    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:34.421782    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:34.421782    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:34.421782    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:34.421782    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:34 GMT
	I0416 18:21:34.422806    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0416 18:21:34.423444    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:34.423533    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:34.423533    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:34.423533    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:34.427244    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:34.427244    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:34.427244    6100 round_trippers.go:580]     Audit-Id: 4c02c3e3-84ff-48bd-9aab-90b7093918b8
	I0416 18:21:34.427244    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:34.427244    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:34.427244    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:34.427244    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:34.427244    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:34 GMT
	I0416 18:21:34.427996    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:34.916434    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:34.916434    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:34.916434    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:34.916434    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:34.921034    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:34.921034    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:34.921034    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:35 GMT
	I0416 18:21:34.921034    6100 round_trippers.go:580]     Audit-Id: 0881b3d5-2367-4d48-b656-506a6b78312e
	I0416 18:21:34.921034    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:34.921034    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:34.921034    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:34.921034    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:34.921304    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0416 18:21:34.922030    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:34.922109    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:34.922109    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:34.922109    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:34.924843    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:34.924843    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:34.925328    6100 round_trippers.go:580]     Audit-Id: 1405c11a-6691-44f2-bec1-9cb30820a7e9
	I0416 18:21:34.925328    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:34.925328    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:34.925328    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:34.925328    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:34.925328    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:35 GMT
	I0416 18:21:34.925328    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:35.412036    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:35.412036    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:35.412036    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:35.412036    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:35.415610    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:35.416406    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:35.416406    6100 round_trippers.go:580]     Audit-Id: 22a11f50-f897-467e-ab73-4ac1ffa509dc
	I0416 18:21:35.416406    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:35.416406    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:35.416406    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:35.416543    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:35.416543    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:35 GMT
	I0416 18:21:35.416852    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0416 18:21:35.417816    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:35.417908    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:35.417908    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:35.417908    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:35.421430    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:35.421430    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:35.421430    6100 round_trippers.go:580]     Audit-Id: dbd13b80-a695-4007-abf2-60b9594c24f1
	I0416 18:21:35.421540    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:35.421540    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:35.421540    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:35.421540    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:35.421540    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:35 GMT
	I0416 18:21:35.421943    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:35.910568    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:35.910568    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:35.910568    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:35.910568    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:35.914558    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:35.914558    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:35.915287    6100 round_trippers.go:580]     Audit-Id: 05ea3aca-fb99-4223-9ccd-5097e82f227c
	I0416 18:21:35.915287    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:35.915287    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:35.915287    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:35.915287    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:35.915287    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:36 GMT
	I0416 18:21:35.915506    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0416 18:21:35.916187    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:35.916187    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:35.916187    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:35.916187    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:35.918772    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:35.919570    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:35.919570    6100 round_trippers.go:580]     Audit-Id: 2dd77766-3958-4b5a-ab8a-cb9c38078ffb
	I0416 18:21:35.919570    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:35.919570    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:35.919570    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:35.919570    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:35.919570    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:36 GMT
	I0416 18:21:35.919779    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:35.920080    6100 pod_ready.go:102] pod "coredns-76f75df574-86z7h" in "kube-system" namespace has status "Ready":"False"
	I0416 18:21:36.415559    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:36.415559    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:36.415559    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:36.415688    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:36.418551    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:36.418551    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:36.418551    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:36 GMT
	I0416 18:21:36.418551    6100 round_trippers.go:580]     Audit-Id: 77f9c7e1-e46b-4f0d-9c44-ba7e0aa33021
	I0416 18:21:36.418551    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:36.418551    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:36.418551    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:36.418551    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:36.419555    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0416 18:21:36.419555    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:36.419555    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:36.419555    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:36.419555    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:36.424547    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:36.424547    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:36.424547    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:36.424547    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:36.424939    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:36 GMT
	I0416 18:21:36.424939    6100 round_trippers.go:580]     Audit-Id: ede48082-6680-4c85-b1b3-ab4a733730de
	I0416 18:21:36.424939    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:36.424939    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:36.426763    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:36.914863    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:36.914863    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:36.914863    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:36.914863    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:36.921640    6100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 18:21:36.921640    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:36.921640    6100 round_trippers.go:580]     Audit-Id: 82613e9a-0aa0-4889-8472-cddd7ed3be27
	I0416 18:21:36.921640    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:36.921640    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:36.921640    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:36.921640    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:36.921640    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:37 GMT
	I0416 18:21:36.922498    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0416 18:21:36.923219    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:36.923281    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:36.923281    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:36.923281    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:36.925647    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:36.926100    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:36.926100    6100 round_trippers.go:580]     Audit-Id: d5921aeb-6706-4f81-b65f-5e63d2ea2e65
	I0416 18:21:36.926100    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:36.926100    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:36.926100    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:36.926100    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:36.926100    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:37 GMT
	I0416 18:21:36.926300    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:37.411714    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:37.411790    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:37.411790    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:37.411790    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:37.414703    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:37.415086    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:37.415086    6100 round_trippers.go:580]     Audit-Id: 6c259737-b8c4-41a7-bf15-83bd23207d1b
	I0416 18:21:37.415086    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:37.415086    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:37.415086    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:37.415147    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:37.415147    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:37 GMT
	I0416 18:21:37.415147    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1490","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6788 chars]
	I0416 18:21:37.416555    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:37.416622    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:37.416688    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:37.416688    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:37.422432    6100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 18:21:37.422432    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:37.422432    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:37.422432    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:37 GMT
	I0416 18:21:37.422432    6100 round_trippers.go:580]     Audit-Id: 0003391b-7653-44ee-81ff-3505738c482a
	I0416 18:21:37.422432    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:37.422432    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:37.422432    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:37.422432    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:37.422432    6100 pod_ready.go:92] pod "coredns-76f75df574-86z7h" in "kube-system" namespace has status "Ready":"True"
	I0416 18:21:37.423396    6100 pod_ready.go:81] duration metric: took 3.5154309s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:37.423396    6100 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:37.423396    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:21:37.423396    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:37.423396    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:37.423396    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:37.426400    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:37.426640    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:37.426640    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:37.426640    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:37.426640    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:37.426640    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:37.426640    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:37 GMT
	I0416 18:21:37.426640    6100 round_trippers.go:580]     Audit-Id: e7b46452-6f6f-4ee6-a2c4-19e06c76edaf
	I0416 18:21:37.426640    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"7c7a0e73-a281-4231-95c7-479afeb4945c","resourceVersion":"1397","creationTimestamp":"2024-04-16T18:21:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.83.104:2379","kubernetes.io/config.hash":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.mirror":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.seen":"2024-04-16T18:21:23.147214167Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0416 18:21:37.427230    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:37.427230    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:37.427230    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:37.427230    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:37.430400    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:37.430400    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:37.430400    6100 round_trippers.go:580]     Audit-Id: 9d27dd9a-ded4-4f73-b640-36105bfd0581
	I0416 18:21:37.430400    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:37.430400    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:37.430400    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:37.430400    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:37.430400    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:37 GMT
	I0416 18:21:37.430400    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:37.938595    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:21:37.938595    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:37.938685    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:37.938685    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:37.943248    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:37.943248    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:37.943248    6100 round_trippers.go:580]     Audit-Id: bf0d9622-f0eb-40d6-8fce-d8c3f54fb33f
	I0416 18:21:37.943248    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:37.943248    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:37.943347    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:37.943347    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:37.943347    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:38 GMT
	I0416 18:21:37.943347    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"7c7a0e73-a281-4231-95c7-479afeb4945c","resourceVersion":"1397","creationTimestamp":"2024-04-16T18:21:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.83.104:2379","kubernetes.io/config.hash":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.mirror":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.seen":"2024-04-16T18:21:23.147214167Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0416 18:21:37.944628    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:37.944628    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:37.944628    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:37.944628    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:37.950971    6100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 18:21:37.950971    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:37.951586    6100 round_trippers.go:580]     Audit-Id: a16b1d95-efc0-4220-8118-d1ab05defa3c
	I0416 18:21:37.951586    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:37.951681    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:37.951681    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:37.951681    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:37.951681    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:38 GMT
	I0416 18:21:37.951880    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:38.436917    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:21:38.437023    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:38.437023    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:38.437023    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:38.441799    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:38.441799    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:38.441799    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:38.441799    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:38.441957    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:38.441957    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:38.441957    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:38 GMT
	I0416 18:21:38.441957    6100 round_trippers.go:580]     Audit-Id: 13740a6a-2225-486e-a48a-f7edb8c8dd4c
	I0416 18:21:38.442184    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"7c7a0e73-a281-4231-95c7-479afeb4945c","resourceVersion":"1397","creationTimestamp":"2024-04-16T18:21:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.83.104:2379","kubernetes.io/config.hash":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.mirror":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.seen":"2024-04-16T18:21:23.147214167Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0416 18:21:38.443146    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:38.443146    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:38.443146    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:38.443146    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:38.446653    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:38.447068    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:38.447130    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:38.447130    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:38.447130    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:38.447130    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:38 GMT
	I0416 18:21:38.447130    6100 round_trippers.go:580]     Audit-Id: 14dbe3de-0d14-47fb-af48-02fa2266f924
	I0416 18:21:38.447221    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:38.447435    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:38.935672    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:21:38.935672    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:38.935672    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:38.935672    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:38.939389    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:38.939389    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:38.939389    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:38.939389    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:38.939389    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:39 GMT
	I0416 18:21:38.939389    6100 round_trippers.go:580]     Audit-Id: ce5deb3e-13eb-4e59-b5ab-374116f13ac5
	I0416 18:21:38.939389    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:38.939389    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:38.940276    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"7c7a0e73-a281-4231-95c7-479afeb4945c","resourceVersion":"1397","creationTimestamp":"2024-04-16T18:21:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.83.104:2379","kubernetes.io/config.hash":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.mirror":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.seen":"2024-04-16T18:21:23.147214167Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0416 18:21:38.941170    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:38.941281    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:38.941281    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:38.941281    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:38.944552    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:38.944552    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:38.944552    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:38.944552    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:38.944552    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:39 GMT
	I0416 18:21:38.944552    6100 round_trippers.go:580]     Audit-Id: 72684c84-6ca6-4dd1-90b0-2bb49fe68be5
	I0416 18:21:38.944552    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:38.944552    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:38.945005    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:39.433328    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:21:39.433328    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.433582    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.433582    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.438155    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:39.438240    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.438240    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.438240    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.438322    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:39 GMT
	I0416 18:21:39.438355    6100 round_trippers.go:580]     Audit-Id: 8ed069fe-9db3-4d0d-8f9b-9817b042bb1d
	I0416 18:21:39.438355    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.438355    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.438355    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"7c7a0e73-a281-4231-95c7-479afeb4945c","resourceVersion":"1397","creationTimestamp":"2024-04-16T18:21:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.83.104:2379","kubernetes.io/config.hash":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.mirror":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.seen":"2024-04-16T18:21:23.147214167Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0416 18:21:39.439719    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:39.439719    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.439818    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.439818    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.443188    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:39.443188    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.443188    6100 round_trippers.go:580]     Audit-Id: 660d0d7d-4226-476e-9217-e5e60e717268
	I0416 18:21:39.443188    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.443188    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.443188    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.443188    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.443188    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:39 GMT
	I0416 18:21:39.444417    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:39.444771    6100 pod_ready.go:102] pod "etcd-multinode-945500" in "kube-system" namespace has status "Ready":"False"
	I0416 18:21:39.932115    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:21:39.932115    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.932115    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.932115    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.934835    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:39.935834    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.935834    6100 round_trippers.go:580]     Audit-Id: 46f3d1a5-6a00-4b6d-b031-4b5bfda076b1
	I0416 18:21:39.935834    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.935834    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.935834    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.935834    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.935834    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.935970    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"7c7a0e73-a281-4231-95c7-479afeb4945c","resourceVersion":"1499","creationTimestamp":"2024-04-16T18:21:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.83.104:2379","kubernetes.io/config.hash":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.mirror":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.seen":"2024-04-16T18:21:23.147214167Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6160 chars]
	I0416 18:21:39.936542    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:39.936542    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.936542    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.936542    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.943568    6100 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 18:21:39.943662    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.943688    6100 round_trippers.go:580]     Audit-Id: e1a9efe5-074b-4407-8f97-81b0f5e45ca4
	I0416 18:21:39.943688    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.943688    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.943688    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.943688    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.943688    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.943688    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:39.944397    6100 pod_ready.go:92] pod "etcd-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:21:39.944397    6100 pod_ready.go:81] duration metric: took 2.5208579s for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.944397    6100 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.944397    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-945500
	I0416 18:21:39.944397    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.944397    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.944397    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.947308    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:39.947308    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.947308    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.947308    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.947308    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.947308    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.947308    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.947308    6100 round_trippers.go:580]     Audit-Id: bf8f1406-fc86-4b56-a692-fe908308325e
	I0416 18:21:39.947692    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-945500","namespace":"kube-system","uid":"249203ba-a5d5-4e35-af8e-172d64c91440","resourceVersion":"1488","creationTimestamp":"2024-04-16T18:21:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.83.104:8443","kubernetes.io/config.hash":"2693abda4b2acecd43625f54801b2092","kubernetes.io/config.mirror":"2693abda4b2acecd43625f54801b2092","kubernetes.io/config.seen":"2024-04-16T18:21:23.093778187Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7695 chars]
	I0416 18:21:39.947692    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:39.947692    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.947692    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.948240    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.950433    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:39.950433    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.950433    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.950433    6100 round_trippers.go:580]     Audit-Id: 5cc856f2-b860-4b00-b6f0-f2b2c65d4463
	I0416 18:21:39.951472    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.951472    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.951472    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.951536    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.951769    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:39.952237    6100 pod_ready.go:92] pod "kube-apiserver-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:21:39.952237    6100 pod_ready.go:81] duration metric: took 7.8403ms for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.952237    6100 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.952346    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-945500
	I0416 18:21:39.952346    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.952346    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.952394    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.955107    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:39.955107    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.955107    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.955107    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.955107    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.955107    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.955107    6100 round_trippers.go:580]     Audit-Id: 26b3c2de-6b6a-40e4-816e-5a1da659023a
	I0416 18:21:39.955107    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.955107    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-945500","namespace":"kube-system","uid":"01b937c2-9827-4240-83f0-3536fec5eb5e","resourceVersion":"1496","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.mirror":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.seen":"2024-04-16T17:57:28.101473146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0416 18:21:39.956087    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:39.956087    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.956087    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.956087    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.959080    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:39.959080    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.959080    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.959080    6100 round_trippers.go:580]     Audit-Id: 9b9d4b5a-b0cd-4992-b3c9-35b42f392010
	I0416 18:21:39.959080    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.959080    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.959080    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.959080    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.959711    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:39.960176    6100 pod_ready.go:92] pod "kube-controller-manager-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:21:39.960176    6100 pod_ready.go:81] duration metric: took 7.9378ms for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.960176    6100 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q5bdr" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.960289    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q5bdr
	I0416 18:21:39.960289    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.960289    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.960289    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.962889    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:39.963445    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.963445    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.963445    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.963445    6100 round_trippers.go:580]     Audit-Id: 7cde5edf-b459-4175-93fb-29a22c7f29b6
	I0416 18:21:39.963445    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.963445    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.963445    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.963627    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q5bdr","generateName":"kube-proxy-","namespace":"kube-system","uid":"18f90e3f-dd52-44a3-918a-66181a779f58","resourceVersion":"614","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5826 chars]
	I0416 18:21:39.964245    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:21:39.964311    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.964311    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.964311    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.966213    6100 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0416 18:21:39.967060    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.967060    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.967060    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.967060    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.967060    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.967060    6100 round_trippers.go:580]     Audit-Id: fdfe1c9b-29b1-4349-a1d2-7d45560eb224
	I0416 18:21:39.967060    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.967258    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"1253","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-mana [truncated 3831 chars]
	I0416 18:21:39.967651    6100 pod_ready.go:92] pod "kube-proxy-q5bdr" in "kube-system" namespace has status "Ready":"True"
	I0416 18:21:39.967651    6100 pod_ready.go:81] duration metric: took 7.4745ms for pod "kube-proxy-q5bdr" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.967651    6100 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.967807    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 18:21:39.967807    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.967807    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.967807    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.971020    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:39.971020    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.971020    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.971020    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.971020    6100 round_trippers.go:580]     Audit-Id: b0d33fde-5c6d-49ca-a035-9154f49fd9c8
	I0416 18:21:39.971020    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.971020    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.971020    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.971604    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rfxsg","generateName":"kube-proxy-","namespace":"kube-system","uid":"b740e6e0-4768-4dd4-a958-307662a92578","resourceVersion":"1410","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0416 18:21:39.971781    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:39.971781    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.971781    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.971781    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.975513    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:39.975513    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.975513    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.975513    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.975513    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.975513    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.975513    6100 round_trippers.go:580]     Audit-Id: 81d6b4f1-b72b-4a04-b212-c91b6a4ed4a5
	I0416 18:21:39.975513    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.975513    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:39.975513    6100 pod_ready.go:92] pod "kube-proxy-rfxsg" in "kube-system" namespace has status "Ready":"True"
	I0416 18:21:39.975513    6100 pod_ready.go:81] duration metric: took 7.8617ms for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.975513    6100 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:40.164366    6100 request.go:629] Waited for 188.8427ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 18:21:40.164570    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 18:21:40.164570    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:40.164570    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:40.164630    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:40.170107    6100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 18:21:40.170107    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:40.170107    6100 round_trippers.go:580]     Audit-Id: dc65597b-8bad-4c5b-ba54-efb95d5a6d06
	I0416 18:21:40.170107    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:40.170600    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:40.170600    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:40.170712    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:40.170712    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:40.172101    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-945500","namespace":"kube-system","uid":"a09e52e8-1ac2-4c22-8a3d-57969fae85a9","resourceVersion":"1495","creationTimestamp":"2024-04-16T17:57:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.mirror":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.seen":"2024-04-16T17:57:20.694761708Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0416 18:21:40.351945    6100 request.go:629] Waited for 178.7261ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:40.351945    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:40.352301    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:40.352301    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:40.352364    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:40.355701    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:40.355893    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:40.356154    6100 round_trippers.go:580]     Audit-Id: fb1a54cf-8dd3-4f57-9668-350183301549
	I0416 18:21:40.356410    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:40.356410    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:40.356756    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:40.356756    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:40.356756    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:40.356756    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:40.357383    6100 pod_ready.go:92] pod "kube-scheduler-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:21:40.357383    6100 pod_ready.go:81] duration metric: took 381.8484ms for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:40.357383    6100 pod_ready.go:38] duration metric: took 6.4621349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:21:40.357383    6100 api_server.go:52] waiting for apiserver process to appear ...
	I0416 18:21:40.365891    6100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 18:21:40.388252    6100 command_runner.go:130] > 1832
	I0416 18:21:40.388395    6100 api_server.go:72] duration metric: took 8.8052754s to wait for apiserver process to appear ...
	I0416 18:21:40.388490    6100 api_server.go:88] waiting for apiserver healthz status ...
	I0416 18:21:40.388490    6100 api_server.go:253] Checking apiserver healthz at https://172.19.83.104:8443/healthz ...
	I0416 18:21:40.398159    6100 api_server.go:279] https://172.19.83.104:8443/healthz returned 200:
	ok
	I0416 18:21:40.398954    6100 round_trippers.go:463] GET https://172.19.83.104:8443/version
	I0416 18:21:40.398954    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:40.398954    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:40.398954    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:40.400128    6100 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0416 18:21:40.400128    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:40.400267    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:40.400267    6100 round_trippers.go:580]     Content-Length: 263
	I0416 18:21:40.400267    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:40.400267    6100 round_trippers.go:580]     Audit-Id: 29b02fa8-179e-4e10-905a-f93eba60ae66
	I0416 18:21:40.400267    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:40.400267    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:40.400267    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:40.400267    6100 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0416 18:21:40.400478    6100 api_server.go:141] control plane version: v1.29.3
	I0416 18:21:40.400514    6100 api_server.go:131] duration metric: took 12.0231ms to wait for apiserver health ...
	I0416 18:21:40.400553    6100 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 18:21:40.553114    6100 request.go:629] Waited for 152.1242ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods
	I0416 18:21:40.553114    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods
	I0416 18:21:40.553114    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:40.553114    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:40.553114    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:40.557787    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:40.557787    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:40.557787    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:40.557787    6100 round_trippers.go:580]     Audit-Id: 7afe4704-a15c-4f3f-8ef1-74ca8d7c3124
	I0416 18:21:40.557787    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:40.557787    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:40.557787    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:40.557787    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:40.559510    6100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1503"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1490","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 71649 chars]
	I0416 18:21:40.562596    6100 system_pods.go:59] 10 kube-system pods found
	I0416 18:21:40.562674    6100 system_pods.go:61] "coredns-76f75df574-86z7h" [1ca004a0-0575-4576-a5ed-ba0891a7d277] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "etcd-multinode-945500" [7c7a0e73-a281-4231-95c7-479afeb4945c] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "kindnet-7pg6g" [b4887fd4-c2ff-40a2-ab8f-89e227151faa] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "kindnet-tp7jl" [91595b62-10ee-47cb-a0c9-2ca83ad70af7] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "kube-apiserver-multinode-945500" [249203ba-a5d5-4e35-af8e-172d64c91440] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "kube-controller-manager-multinode-945500" [01b937c2-9827-4240-83f0-3536fec5eb5e] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "kube-proxy-q5bdr" [18f90e3f-dd52-44a3-918a-66181a779f58] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "kube-proxy-rfxsg" [b740e6e0-4768-4dd4-a958-307662a92578] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "kube-scheduler-multinode-945500" [a09e52e8-1ac2-4c22-8a3d-57969fae85a9] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "storage-provisioner" [3bd5cc95-eef6-473e-b6f9-898568046f1b] Running
	I0416 18:21:40.562674    6100 system_pods.go:74] duration metric: took 162.112ms to wait for pod list to return data ...
	I0416 18:21:40.562674    6100 default_sa.go:34] waiting for default service account to be created ...
	I0416 18:21:40.755578    6100 request.go:629] Waited for 192.6589ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/namespaces/default/serviceaccounts
	I0416 18:21:40.755835    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/default/serviceaccounts
	I0416 18:21:40.755931    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:40.755931    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:40.755931    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:40.759116    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:40.759821    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:40.759821    6100 round_trippers.go:580]     Content-Length: 262
	I0416 18:21:40.759821    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:40.759915    6100 round_trippers.go:580]     Audit-Id: 4233f8fe-ea2b-49ab-bcca-af631fea79fd
	I0416 18:21:40.759915    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:40.759915    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:40.759915    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:40.759915    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:40.759915    6100 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1503"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"26260d2a-9800-4f2e-87ba-a34049d52e3f","resourceVersion":"332","creationTimestamp":"2024-04-16T17:57:40Z"}}]}
	I0416 18:21:40.760359    6100 default_sa.go:45] found service account: "default"
	I0416 18:21:40.760452    6100 default_sa.go:55] duration metric: took 197.6791ms for default service account to be created ...
	I0416 18:21:40.760452    6100 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 18:21:40.956893    6100 request.go:629] Waited for 196.3186ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods
	I0416 18:21:40.957238    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods
	I0416 18:21:40.957238    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:40.957238    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:40.957238    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:40.965417    6100 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0416 18:21:40.965417    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:40.965417    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:41 GMT
	I0416 18:21:40.965586    6100 round_trippers.go:580]     Audit-Id: 9b21ae42-d638-4a7c-a7df-cff709a98ea0
	I0416 18:21:40.965586    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:40.965586    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:40.965586    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:40.965586    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:40.966433    6100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1503"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1490","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 71649 chars]
	I0416 18:21:40.971169    6100 system_pods.go:86] 10 kube-system pods found
	I0416 18:21:40.971712    6100 system_pods.go:89] "coredns-76f75df574-86z7h" [1ca004a0-0575-4576-a5ed-ba0891a7d277] Running
	I0416 18:21:40.971712    6100 system_pods.go:89] "etcd-multinode-945500" [7c7a0e73-a281-4231-95c7-479afeb4945c] Running
	I0416 18:21:40.971712    6100 system_pods.go:89] "kindnet-7pg6g" [b4887fd4-c2ff-40a2-ab8f-89e227151faa] Running
	I0416 18:21:40.971712    6100 system_pods.go:89] "kindnet-tp7jl" [91595b62-10ee-47cb-a0c9-2ca83ad70af7] Running
	I0416 18:21:40.971843    6100 system_pods.go:89] "kube-apiserver-multinode-945500" [249203ba-a5d5-4e35-af8e-172d64c91440] Running
	I0416 18:21:40.971843    6100 system_pods.go:89] "kube-controller-manager-multinode-945500" [01b937c2-9827-4240-83f0-3536fec5eb5e] Running
	I0416 18:21:40.971903    6100 system_pods.go:89] "kube-proxy-q5bdr" [18f90e3f-dd52-44a3-918a-66181a779f58] Running
	I0416 18:21:40.971903    6100 system_pods.go:89] "kube-proxy-rfxsg" [b740e6e0-4768-4dd4-a958-307662a92578] Running
	I0416 18:21:40.971903    6100 system_pods.go:89] "kube-scheduler-multinode-945500" [a09e52e8-1ac2-4c22-8a3d-57969fae85a9] Running
	I0416 18:21:40.971903    6100 system_pods.go:89] "storage-provisioner" [3bd5cc95-eef6-473e-b6f9-898568046f1b] Running
	I0416 18:21:40.971968    6100 system_pods.go:126] duration metric: took 211.5043ms to wait for k8s-apps to be running ...
	I0416 18:21:40.971968    6100 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 18:21:40.981798    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:21:41.008878    6100 system_svc.go:56] duration metric: took 36.9079ms WaitForService to wait for kubelet
	I0416 18:21:41.008997    6100 kubeadm.go:576] duration metric: took 9.4258421s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 18:21:41.008997    6100 node_conditions.go:102] verifying NodePressure condition ...
	I0416 18:21:41.160620    6100 request.go:629] Waited for 151.3974ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/nodes
	I0416 18:21:41.160740    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes
	I0416 18:21:41.160740    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:41.160740    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:41.160740    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:41.164643    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:41.164643    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:41.164643    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:41.164643    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:41.164643    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:41.164643    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:41 GMT
	I0416 18:21:41.164643    6100 round_trippers.go:580]     Audit-Id: dbc145b1-a726-4a63-9c8a-a3bf75497182
	I0416 18:21:41.164643    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:41.165830    6100 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1503"},"items":[{"metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"ma
nagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 10122 chars]
	I0416 18:21:41.166716    6100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:21:41.166798    6100 node_conditions.go:123] node cpu capacity is 2
	I0416 18:21:41.166798    6100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:21:41.166881    6100 node_conditions.go:123] node cpu capacity is 2
	I0416 18:21:41.166881    6100 node_conditions.go:105] duration metric: took 157.8756ms to run NodePressure ...
	I0416 18:21:41.166881    6100 start.go:240] waiting for startup goroutines ...
	I0416 18:21:41.166881    6100 start.go:245] waiting for cluster config update ...
	I0416 18:21:41.166967    6100 start.go:254] writing updated cluster config ...
	I0416 18:21:41.168822    6100 out.go:177] 
	I0416 18:21:41.180869    6100 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:21:41.180869    6100 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 18:21:41.183873    6100 out.go:177] * Starting "multinode-945500-m02" worker node in "multinode-945500" cluster
	I0416 18:21:41.183981    6100 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 18:21:41.183981    6100 cache.go:56] Caching tarball of preloaded images
	I0416 18:21:41.184538    6100 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 18:21:41.184538    6100 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 18:21:41.184538    6100 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 18:21:41.186224    6100 start.go:360] acquireMachinesLock for multinode-945500-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 18:21:41.186606    6100 start.go:364] duration metric: took 381.2µs to acquireMachinesLock for "multinode-945500-m02"
	I0416 18:21:41.186606    6100 start.go:96] Skipping create...Using existing machine configuration
	I0416 18:21:41.186606    6100 fix.go:54] fixHost starting: m02
	I0416 18:21:41.187265    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:21:43.122720    6100 main.go:141] libmachine: [stdout =====>] : Off
	
	I0416 18:21:43.123029    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:43.123029    6100 fix.go:112] recreateIfNeeded on multinode-945500-m02: state=Stopped err=<nil>
	W0416 18:21:43.123029    6100 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 18:21:43.123676    6100 out.go:177] * Restarting existing hyperv VM for "multinode-945500-m02" ...
	I0416 18:21:43.123676    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-945500-m02
	I0416 18:21:45.748144    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:21:45.748144    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:45.748144    6100 main.go:141] libmachine: Waiting for host to start...
	I0416 18:21:45.748144    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:21:47.860568    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:21:47.860720    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:47.860882    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:21:50.102037    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:21:50.102078    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:51.112105    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:21:53.127238    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:21:53.127730    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:53.127793    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:21:55.401118    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:21:55.401306    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:56.414887    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:21:58.409960    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:21:58.409960    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:58.410889    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:00.689836    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:22:00.690782    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:01.700444    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:03.673705    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:03.673705    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:03.674543    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:05.957195    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:22:05.957382    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:06.959242    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:08.966475    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:08.966475    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:08.966475    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:11.348413    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:11.348719    6100 main.go:141] libmachine: [stderr =====>] : 
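The repeated `Get-VM … ipaddresses[0]` calls above are a poll loop: libmachine re-queries the VM until Hyper-V reports an address (empty stdout means the guest has no IP yet). A minimal sketch of that loop, with a stub standing in for the PowerShell query (the stub, its threshold, and the short sleep are illustrative assumptions, not minikube code):

```shell
#!/usr/bin/env bash
# Poll-until-IP sketch. get_vm_ip is a stub for:
#   (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]
# It returns empty output for the first two polls, then an address.
tries=0
ip=""
while [ -z "$ip" ]; do
  tries=$((tries + 1))
  # Stub query result: empty until the third attempt.
  if [ "$tries" -ge 3 ]; then ip="172.19.85.190"; fi
  # Real code waits roughly a second between PowerShell round trips.
  [ -z "$ip" ] && sleep 0.1
done
echo "host up at $ip after $tries polls"
```

The log shows the same shape: two rounds with empty stdout, then `172.19.85.190` on the third.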
	I0416 18:22:11.351698    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:13.328503    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:13.328503    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:13.328503    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:15.603692    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:15.603692    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:15.604348    6100 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 18:22:15.606274    6100 machine.go:94] provisionDockerMachine start ...
	I0416 18:22:15.606274    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:17.566084    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:17.566084    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:17.566355    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:19.898403    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:19.899352    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:19.905720    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:22:19.906322    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.190 22 <nil> <nil>}
	I0416 18:22:19.906322    6100 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 18:22:20.042650    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 18:22:20.042650    6100 buildroot.go:166] provisioning hostname "multinode-945500-m02"
	I0416 18:22:20.042650    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:21.954446    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:21.955274    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:21.955274    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:24.274342    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:24.275253    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:24.279122    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:22:24.279191    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.190 22 <nil> <nil>}
	I0416 18:22:24.279191    6100 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-945500-m02 && echo "multinode-945500-m02" | sudo tee /etc/hostname
	I0416 18:22:24.439073    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-945500-m02
	
	I0416 18:22:24.439073    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:26.366612    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:26.367251    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:26.367310    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:28.659724    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:28.659724    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:28.664426    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:22:28.664502    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.190 22 <nil> <nil>}
	I0416 18:22:28.664502    6100 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-945500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-945500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-945500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 18:22:28.805396    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
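The SSH snippet the provisioner just ran updates `/etc/hosts` idempotently: do nothing if the hostname is already present, rewrite the `127.0.1.1` entry if one exists, otherwise append one. The same logic can be exercised against a temporary file (the temp path and seed contents are assumptions so the sketch runs without root):

```shell
#!/usr/bin/env bash
# Idempotent hostname-entry update, mirroring the provisioner's logic,
# but against a scratch file instead of /etc/hosts.
HOSTS=$(mktemp)
NAME="multinode-945500-m02"
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$HOSTS"

if ! grep -q "\s${NAME}$" "$HOSTS"; then
  if grep -q '^127.0.1.1\s' "$HOSTS"; then
    # An existing 127.0.1.1 entry: rewrite it in place.
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${NAME}/" "$HOSTS"
  else
    # No 127.0.1.1 entry yet: append one.
    echo "127.0.1.1 ${NAME}" >> "$HOSTS"
  fi
fi
grep '^127.0.1.1' "$HOSTS"
```

Running it a second time is a no-op, which is why the provisioner can safely re-run it on every restart.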
	I0416 18:22:28.805396    6100 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 18:22:28.805473    6100 buildroot.go:174] setting up certificates
	I0416 18:22:28.805473    6100 provision.go:84] configureAuth start
	I0416 18:22:28.805578    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:30.740898    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:30.740898    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:30.741442    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:33.046580    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:33.047578    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:33.047672    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:34.967762    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:34.967762    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:34.967762    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:37.247308    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:37.247491    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:37.247491    6100 provision.go:143] copyHostCerts
	I0416 18:22:37.247647    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 18:22:37.247864    6100 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 18:22:37.247864    6100 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 18:22:37.248291    6100 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 18:22:37.248730    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 18:22:37.248730    6100 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 18:22:37.248730    6100 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 18:22:37.249295    6100 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 18:22:37.250155    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 18:22:37.250221    6100 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 18:22:37.250221    6100 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 18:22:37.250221    6100 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 18:22:37.250817    6100 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-945500-m02 san=[127.0.0.1 172.19.85.190 localhost minikube multinode-945500-m02]
	I0416 18:22:37.362535    6100 provision.go:177] copyRemoteCerts
	I0416 18:22:37.372314    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 18:22:37.372314    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:39.281788    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:39.281836    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:39.281836    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:41.591349    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:41.591349    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:41.591871    6100 sshutil.go:53] new ssh client: &{IP:172.19.85.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:22:41.697556    6100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3249959s)
	I0416 18:22:41.698576    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 18:22:41.698576    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 18:22:41.744601    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 18:22:41.745250    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0416 18:22:41.790739    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 18:22:41.790917    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 18:22:41.837836    6100 provision.go:87] duration metric: took 13.0315585s to configureAuth
	I0416 18:22:41.837836    6100 buildroot.go:189] setting minikube options for container-runtime
	I0416 18:22:41.839458    6100 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:22:41.839610    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:43.795958    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:43.795958    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:43.796720    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:46.098211    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:46.098322    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:46.104009    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:22:46.104009    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.190 22 <nil> <nil>}
	I0416 18:22:46.104533    6100 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 18:22:46.233949    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 18:22:46.234075    6100 buildroot.go:70] root file system type: tmpfs
	I0416 18:22:46.234212    6100 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 18:22:46.234297    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:48.196256    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:48.196311    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:48.196311    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:50.533367    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:50.533940    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:50.538400    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:22:50.539010    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.190 22 <nil> <nil>}
	I0416 18:22:50.539010    6100 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.83.104"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 18:22:50.694879    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.83.104
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 18:22:50.695011    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:52.641130    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:52.641130    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:52.641203    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:54.978009    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:54.978009    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:54.981917    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:22:54.982440    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.190 22 <nil> <nil>}
	I0416 18:22:54.982440    6100 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 18:22:57.116758    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 18:22:57.116856    6100 machine.go:97] duration metric: took 41.5082239s to provisionDockerMachine
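The `diff … || { mv …; systemctl … }` command above is a write-if-changed install: the new unit replaces the old one, and docker is reloaded/restarted, only when the rendered file actually differs (here `diff` failed with "No such file" on a fresh VM, so the new unit was installed). A sketch of the pattern with temp files standing in for the unit paths (paths, contents, and the echo are illustrative assumptions):

```shell
#!/usr/bin/env bash
# Install-only-if-changed sketch for a rendered systemd unit.
current=$(mktemp)    # stands in for /lib/systemd/system/docker.service
candidate=$(mktemp)  # stands in for docker.service.new
echo "ExecStart=/usr/bin/dockerd --old-flag" > "$current"
echo "ExecStart=/usr/bin/dockerd --new-flag" > "$candidate"

if ! diff -u "$current" "$candidate"; then
  # Files differ: promote the candidate, then (in the real command)
  # systemctl daemon-reload, enable, and restart the service.
  mv "$candidate" "$current"
  echo "unit updated"
fi
cat "$current"
```

When nothing changed, `diff` exits 0 and the restart is skipped entirely, which keeps repeated provisioning runs from bouncing the docker daemon.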
	I0416 18:22:57.116856    6100 start.go:293] postStartSetup for "multinode-945500-m02" (driver="hyperv")
	I0416 18:22:57.116856    6100 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 18:22:57.129239    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 18:22:57.129239    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:59.050319    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:59.050319    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:59.050539    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:23:01.324036    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:23:01.324036    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:01.324616    6100 sshutil.go:53] new ssh client: &{IP:172.19.85.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:23:01.435971    6100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3064877s)
	I0416 18:23:01.445670    6100 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 18:23:01.453013    6100 command_runner.go:130] > NAME=Buildroot
	I0416 18:23:01.453013    6100 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 18:23:01.453076    6100 command_runner.go:130] > ID=buildroot
	I0416 18:23:01.453076    6100 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 18:23:01.453076    6100 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 18:23:01.453112    6100 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 18:23:01.453112    6100 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 18:23:01.453785    6100 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 18:23:01.454945    6100 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 18:23:01.455022    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 18:23:01.465558    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 18:23:01.484368    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 18:23:01.531205    6100 start.go:296] duration metric: took 4.4140988s for postStartSetup
	I0416 18:23:01.531205    6100 fix.go:56] duration metric: took 1m20.3400362s for fixHost
	I0416 18:23:01.531205    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:23:03.466236    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:23:03.466236    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:03.466466    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:23:05.859855    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:23:05.859855    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:05.865191    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:23:05.865801    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.190 22 <nil> <nil>}
	I0416 18:23:05.865801    6100 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 18:23:06.005331    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713291786.170114689
	
	I0416 18:23:06.005882    6100 fix.go:216] guest clock: 1713291786.170114689
	I0416 18:23:06.005882    6100 fix.go:229] Guest: 2024-04-16 18:23:06.170114689 +0000 UTC Remote: 2024-04-16 18:23:01.5312057 +0000 UTC m=+211.940753701 (delta=4.638908989s)
	I0416 18:23:06.005989    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:23:07.994063    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:23:07.994063    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:07.994142    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:23:10.359587    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:23:10.359587    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:10.364333    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:23:10.364554    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.190 22 <nil> <nil>}
	I0416 18:23:10.364554    6100 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713291786
	I0416 18:23:10.518299    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 18:23:06 UTC 2024
	
	I0416 18:23:10.518299    6100 fix.go:236] clock set: Tue Apr 16 18:23:06 UTC 2024
	 (err=<nil>)
	I0416 18:23:10.518299    6100 start.go:83] releasing machines lock for "multinode-945500-m02", held for 1m29.3266195s
	I0416 18:23:10.518689    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:23:12.533561    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:23:12.533561    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:12.533561    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:23:14.886419    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:23:14.886419    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:14.887063    6100 out.go:177] * Found network options:
	I0416 18:23:14.888107    6100 out.go:177]   - NO_PROXY=172.19.83.104
	W0416 18:23:14.888536    6100 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 18:23:14.889220    6100 out.go:177]   - NO_PROXY=172.19.83.104
	W0416 18:23:14.889643    6100 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 18:23:14.891355    6100 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 18:23:14.893485    6100 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 18:23:14.893607    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:23:14.903958    6100 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 18:23:14.903958    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:23:16.849662    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:23:16.849662    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:16.849986    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:23:16.863999    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:23:16.863999    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:16.864105    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:23:19.223225    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:23:19.223274    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:19.223677    6100 sshutil.go:53] new ssh client: &{IP:172.19.85.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:23:19.247541    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:23:19.248529    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:19.248849    6100 sshutil.go:53] new ssh client: &{IP:172.19.85.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:23:19.325688    6100 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0416 18:23:19.326693    6100 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.422412s)
	W0416 18:23:19.326776    6100 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 18:23:19.337461    6100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 18:23:19.452882    6100 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 18:23:19.452882    6100 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0416 18:23:19.452882    6100 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 18:23:19.452882    6100 start.go:494] detecting cgroup driver to use...
	I0416 18:23:19.452882    6100 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5591378s)
	I0416 18:23:19.452882    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:23:19.486422    6100 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0416 18:23:19.497666    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 18:23:19.524419    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 18:23:19.544059    6100 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 18:23:19.554792    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 18:23:19.586149    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:23:19.616115    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 18:23:19.645703    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:23:19.676168    6100 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 18:23:19.702038    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 18:23:19.729888    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 18:23:19.756567    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 18:23:19.789461    6100 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 18:23:19.807795    6100 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 18:23:19.819941    6100 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 18:23:19.849051    6100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:23:20.054511    6100 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 18:23:20.086480    6100 start.go:494] detecting cgroup driver to use...
	I0416 18:23:20.097132    6100 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 18:23:20.116134    6100 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0416 18:23:20.117134    6100 command_runner.go:130] > [Unit]
	I0416 18:23:20.117134    6100 command_runner.go:130] > Description=Docker Application Container Engine
	I0416 18:23:20.117600    6100 command_runner.go:130] > Documentation=https://docs.docker.com
	I0416 18:23:20.117600    6100 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0416 18:23:20.117600    6100 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0416 18:23:20.117600    6100 command_runner.go:130] > StartLimitBurst=3
	I0416 18:23:20.117600    6100 command_runner.go:130] > StartLimitIntervalSec=60
	I0416 18:23:20.117660    6100 command_runner.go:130] > [Service]
	I0416 18:23:20.117660    6100 command_runner.go:130] > Type=notify
	I0416 18:23:20.117660    6100 command_runner.go:130] > Restart=on-failure
	I0416 18:23:20.117660    6100 command_runner.go:130] > Environment=NO_PROXY=172.19.83.104
	I0416 18:23:20.117660    6100 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0416 18:23:20.117660    6100 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0416 18:23:20.117660    6100 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0416 18:23:20.117789    6100 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0416 18:23:20.117789    6100 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0416 18:23:20.117824    6100 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0416 18:23:20.117848    6100 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0416 18:23:20.117848    6100 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0416 18:23:20.117891    6100 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0416 18:23:20.117891    6100 command_runner.go:130] > ExecStart=
	I0416 18:23:20.117932    6100 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0416 18:23:20.117968    6100 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0416 18:23:20.118002    6100 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0416 18:23:20.118002    6100 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0416 18:23:20.118002    6100 command_runner.go:130] > LimitNOFILE=infinity
	I0416 18:23:20.118002    6100 command_runner.go:130] > LimitNPROC=infinity
	I0416 18:23:20.118002    6100 command_runner.go:130] > LimitCORE=infinity
	I0416 18:23:20.118002    6100 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0416 18:23:20.118002    6100 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0416 18:23:20.118002    6100 command_runner.go:130] > TasksMax=infinity
	I0416 18:23:20.118002    6100 command_runner.go:130] > TimeoutStartSec=0
	I0416 18:23:20.118002    6100 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0416 18:23:20.118002    6100 command_runner.go:130] > Delegate=yes
	I0416 18:23:20.118002    6100 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0416 18:23:20.118002    6100 command_runner.go:130] > KillMode=process
	I0416 18:23:20.118002    6100 command_runner.go:130] > [Install]
	I0416 18:23:20.118002    6100 command_runner.go:130] > WantedBy=multi-user.target
	I0416 18:23:20.127764    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:23:20.159989    6100 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 18:23:20.205206    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:23:20.239526    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:23:20.275454    6100 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 18:23:20.328999    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:23:20.352572    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:23:20.388223    6100 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0416 18:23:20.400105    6100 ssh_runner.go:195] Run: which cri-dockerd
	I0416 18:23:20.405661    6100 command_runner.go:130] > /usr/bin/cri-dockerd
	I0416 18:23:20.413748    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 18:23:20.430415    6100 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 18:23:20.470575    6100 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 18:23:20.651472    6100 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 18:23:20.825231    6100 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 18:23:20.825326    6100 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 18:23:20.866580    6100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:23:21.044087    6100 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 18:24:22.164247    6100 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0416 18:24:22.164993    6100 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0416 18:24:22.165413    6100 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1178561s)
	I0416 18:24:22.175773    6100 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 systemd[1]: Starting Docker Application Container Engine...
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:55.748792830Z" level=info msg="Starting up"
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:55.749765467Z" level=info msg="containerd not running, starting managed containerd"
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:55.755898330Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=663
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.786942701Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.814425869Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.814628598Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.814724712Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.814749115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.815566430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.815679646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.815908578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.816028495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.816053599Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.816070001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.816633180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.817753338Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.822284176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.822425296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0416 18:24:22.197575    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.822769044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0416 18:24:22.197618    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.822818751Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0416 18:24:22.197652    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.823871399Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0416 18:24:22.197652    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.824045424Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.824070827Z" level=info msg="metadata content store policy set" policy=shared
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837707647Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837777657Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837802060Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837824363Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837863669Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837963783Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838536664Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838741993Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838856109Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838880612Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838900615Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838936320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838957423Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838979426Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839002229Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839022032Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839041235Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839060437Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839089541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839109244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839128147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839193956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839214259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839232962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839250064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198228    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839270167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198267    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839298971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198267    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839315973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198301    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839329075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198301    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839343777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198357    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839357479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198357    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839383283Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0416 18:24:22.198357    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839407386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198357    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839420888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198412    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839433090Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0416 18:24:22.198412    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839554107Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0416 18:24:22.198476    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839576610Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0416 18:24:22.198476    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839594613Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0416 18:24:22.198476    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839606914Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0416 18:24:22.198544    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839667723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198544    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839763536Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0416 18:24:22.198544    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839782839Z" level=info msg="NRI interface is disabled by configuration."
	I0416 18:24:22.198544    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839994869Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0416 18:24:22.198612    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.840059878Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0416 18:24:22.198612    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.840096783Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0416 18:24:22.198612    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.840129388Z" level=info msg="containerd successfully booted in 0.056914s"
	I0416 18:24:22.198677    6100 command_runner.go:130] > Apr 16 18:22:56 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:56.795686761Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0416 18:24:22.198677    6100 command_runner.go:130] > Apr 16 18:22:56 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:56.861574880Z" level=info msg="Loading containers: start."
	I0416 18:24:22.198677    6100 command_runner.go:130] > Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.135429298Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0416 18:24:22.198677    6100 command_runner.go:130] > Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.209800097Z" level=info msg="Loading containers: done."
	I0416 18:24:22.198746    6100 command_runner.go:130] > Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.235075293Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	I0416 18:24:22.198746    6100 command_runner.go:130] > Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.235735779Z" level=info msg="Daemon has completed initialization"
	I0416 18:24:22.198746    6100 command_runner.go:130] > Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.278880906Z" level=info msg="API listen on /var/run/docker.sock"
	I0416 18:24:22.198815    6100 command_runner.go:130] > Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.279304261Z" level=info msg="API listen on [::]:2376"
	I0416 18:24:22.198815    6100 command_runner.go:130] > Apr 16 18:22:57 multinode-945500-m02 systemd[1]: Started Docker Application Container Engine.
	I0416 18:24:22.198815    6100 command_runner.go:130] > Apr 16 18:23:21 multinode-945500-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0416 18:24:22.198815    6100 command_runner.go:130] > Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.236586796Z" level=info msg="Processing signal 'terminated'"
	I0416 18:24:22.198880    6100 command_runner.go:130] > Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238466158Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0416 18:24:22.198880    6100 command_runner.go:130] > Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238667364Z" level=info msg="Daemon shutdown complete"
	I0416 18:24:22.198880    6100 command_runner.go:130] > Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238824370Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0416 18:24:22.198880    6100 command_runner.go:130] > Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238874871Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0416 18:24:22.198950    6100 command_runner.go:130] > Apr 16 18:23:22 multinode-945500-m02 systemd[1]: docker.service: Deactivated successfully.
	I0416 18:24:22.198950    6100 command_runner.go:130] > Apr 16 18:23:22 multinode-945500-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0416 18:24:22.199015    6100 command_runner.go:130] > Apr 16 18:23:22 multinode-945500-m02 systemd[1]: Starting Docker Application Container Engine...
	I0416 18:24:22.199015    6100 command_runner.go:130] > Apr 16 18:23:22 multinode-945500-m02 dockerd[1036]: time="2024-04-16T18:23:22.306307286Z" level=info msg="Starting up"
	I0416 18:24:22.199015    6100 command_runner.go:130] > Apr 16 18:24:22 multinode-945500-m02 dockerd[1036]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0416 18:24:22.199080    6100 command_runner.go:130] > Apr 16 18:24:22 multinode-945500-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0416 18:24:22.199080    6100 command_runner.go:130] > Apr 16 18:24:22 multinode-945500-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0416 18:24:22.199080    6100 command_runner.go:130] > Apr 16 18:24:22 multinode-945500-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0416 18:24:22.205084    6100 out.go:177] 
	W0416 18:24:22.205732    6100 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 18:22:55 multinode-945500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:22:55 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:55.748792830Z" level=info msg="Starting up"
	Apr 16 18:22:55 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:55.749765467Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 18:22:55 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:55.755898330Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=663
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.786942701Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.814425869Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.814628598Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.814724712Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.814749115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.815566430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.815679646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.815908578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.816028495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.816053599Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.816070001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.816633180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.817753338Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.822284176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.822425296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.822769044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.822818751Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.823871399Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.824045424Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.824070827Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837707647Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837777657Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837802060Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837824363Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837863669Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837963783Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838536664Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838741993Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838856109Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838880612Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838900615Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838936320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838957423Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838979426Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839002229Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839022032Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839041235Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839060437Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839089541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839109244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839128147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839193956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839214259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839232962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839250064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839270167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839298971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839315973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839329075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839343777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839357479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839383283Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839407386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839420888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839433090Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839554107Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839576610Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839594613Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839606914Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839667723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839763536Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839782839Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839994869Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.840059878Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.840096783Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.840129388Z" level=info msg="containerd successfully booted in 0.056914s"
	Apr 16 18:22:56 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:56.795686761Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 18:22:56 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:56.861574880Z" level=info msg="Loading containers: start."
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.135429298Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.209800097Z" level=info msg="Loading containers: done."
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.235075293Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.235735779Z" level=info msg="Daemon has completed initialization"
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.278880906Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.279304261Z" level=info msg="API listen on [::]:2376"
	Apr 16 18:22:57 multinode-945500-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 16 18:23:21 multinode-945500-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.236586796Z" level=info msg="Processing signal 'terminated'"
	Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238466158Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238667364Z" level=info msg="Daemon shutdown complete"
	Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238824370Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238874871Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 18:23:22 multinode-945500-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 18:23:22 multinode-945500-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 18:23:22 multinode-945500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:23:22 multinode-945500-m02 dockerd[1036]: time="2024-04-16T18:23:22.306307286Z" level=info msg="Starting up"
	Apr 16 18:24:22 multinode-945500-m02 dockerd[1036]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 18:24:22 multinode-945500-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 18:24:22 multinode-945500-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 18:24:22 multinode-945500-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838880612Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838900615Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838936320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838957423Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838979426Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839002229Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839022032Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839041235Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839060437Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839089541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839109244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839128147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839193956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839214259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839232962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839250064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839270167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839298971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839315973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839329075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839343777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839357479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839383283Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839407386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839420888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839433090Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839554107Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839576610Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839594613Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839606914Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839667723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839763536Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839782839Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839994869Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.840059878Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.840096783Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.840129388Z" level=info msg="containerd successfully booted in 0.056914s"
	Apr 16 18:22:56 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:56.795686761Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 18:22:56 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:56.861574880Z" level=info msg="Loading containers: start."
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.135429298Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.209800097Z" level=info msg="Loading containers: done."
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.235075293Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.235735779Z" level=info msg="Daemon has completed initialization"
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.278880906Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.279304261Z" level=info msg="API listen on [::]:2376"
	Apr 16 18:22:57 multinode-945500-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 16 18:23:21 multinode-945500-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.236586796Z" level=info msg="Processing signal 'terminated'"
	Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238466158Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238667364Z" level=info msg="Daemon shutdown complete"
	Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238824370Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238874871Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 18:23:22 multinode-945500-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 18:23:22 multinode-945500-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 18:23:22 multinode-945500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:23:22 multinode-945500-m02 dockerd[1036]: time="2024-04-16T18:23:22.306307286Z" level=info msg="Starting up"
	Apr 16 18:24:22 multinode-945500-m02 dockerd[1036]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 18:24:22 multinode-945500-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 18:24:22 multinode-945500-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 18:24:22 multinode-945500-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0416 18:24:22.205732    6100 out.go:239] * 
	* 
	W0416 18:24:22.206665    6100 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 18:24:22.207553    6100 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-945500 --wait=true -v=8 --alsologtostderr --driver=hyperv" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-945500 -n multinode-945500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-945500 -n multinode-945500: (10.9889951s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-945500 logs -n 25: (7.6966017s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|----------------|---------------------|---------------------|
	| kubectl | -p multinode-945500 -- apply -f                   | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- rollout                    | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | status deployment/busybox                         |                  |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- get pods -o                | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- get pods -o                | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2 --                       |                  |                   |                |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx --                       |                  |                   |                |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2 --                       |                  |                   |                |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx --                       |                  |                   |                |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2 -- nslookup              |                  |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx -- nslookup              |                  |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- get pods -o                | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-jxvx2                          |                  |                   |                |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |                |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | busybox-7fdf7869d9-jxvx2 -- sh                    |                  |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.80.1                          |                  |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | busybox-7fdf7869d9-ns8nx                          |                  |                   |                |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |                |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |                |                     |                     |
	| kubectl | -p multinode-945500 -- exec                       | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | busybox-7fdf7869d9-ns8nx -- sh                    |                  |                   |                |                     |                     |
	|         | -c ping -c 1 172.19.80.1                          |                  |                   |                |                     |                     |
	| node    | add -p multinode-945500 -v 3                      | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:02 UTC |                     |
	|         | --alsologtostderr                                 |                  |                   |                |                     |                     |
	| node    | multinode-945500 node stop m03                    | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:07 UTC | 16 Apr 24 18:07 UTC |
	| node    | multinode-945500 node start                       | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:08 UTC |                     |
	|         | m03 -v=7 --alsologtostderr                        |                  |                   |                |                     |                     |
	| node    | list -p multinode-945500                          | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:12 UTC |                     |
	| stop    | -p multinode-945500                               | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:12 UTC | 16 Apr 24 18:14 UTC |
	| start   | -p multinode-945500                               | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:14 UTC |                     |
	|         | --wait=true -v=8                                  |                  |                   |                |                     |                     |
	|         | --alsologtostderr                                 |                  |                   |                |                     |                     |
	| node    | list -p multinode-945500                          | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:17 UTC |                     |
	| node    | multinode-945500 node delete                      | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:17 UTC |                     |
	|         | m03                                               |                  |                   |                |                     |                     |
	| stop    | multinode-945500 stop                             | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:17 UTC | 16 Apr 24 18:19 UTC |
	| start   | -p multinode-945500                               | multinode-945500 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:19 UTC |                     |
	|         | --wait=true -v=8                                  |                  |                   |                |                     |                     |
	|         | --alsologtostderr                                 |                  |                   |                |                     |                     |
	|         | --driver=hyperv                                   |                  |                   |                |                     |                     |
	|---------|---------------------------------------------------|------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 18:19:29
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 18:19:29.713253    6100 out.go:291] Setting OutFile to fd 828 ...
	I0416 18:19:29.714291    6100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:19:29.714291    6100 out.go:304] Setting ErrFile to fd 884...
	I0416 18:19:29.714291    6100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:19:29.736940    6100 out.go:298] Setting JSON to false
	I0416 18:19:29.739598    6100 start.go:129] hostinfo: {"hostname":"minikube5","uptime":29199,"bootTime":1713262370,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 18:19:29.739598    6100 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 18:19:29.741462    6100 out.go:177] * [multinode-945500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 18:19:29.741462    6100 notify.go:220] Checking for updates...
	I0416 18:19:29.741462    6100 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 18:19:29.743426    6100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 18:19:29.743949    6100 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 18:19:29.744501    6100 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 18:19:29.745073    6100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 18:19:29.746122    6100 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:19:29.747981    6100 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 18:19:34.540526    6100 out.go:177] * Using the hyperv driver based on existing profile
	I0416 18:19:34.540718    6100 start.go:297] selected driver: hyperv
	I0416 18:19:34.540718    6100 start.go:901] validating driver "hyperv" against &{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.83.232 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.85.139 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingres
s:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 18:19:34.541390    6100 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 18:19:34.584517    6100 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 18:19:34.584517    6100 cni.go:84] Creating CNI manager for ""
	I0416 18:19:34.584517    6100 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0416 18:19:34.584517    6100 start.go:340] cluster config:
	{Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.83.232 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.85.139 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:f
alse kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 18:19:34.584517    6100 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 18:19:34.585999    6100 out.go:177] * Starting "multinode-945500" primary control-plane node in "multinode-945500" cluster
	I0416 18:19:34.586606    6100 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 18:19:34.587302    6100 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 18:19:34.587302    6100 cache.go:56] Caching tarball of preloaded images
	I0416 18:19:34.587441    6100 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 18:19:34.587441    6100 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 18:19:34.588075    6100 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 18:19:34.589695    6100 start.go:360] acquireMachinesLock for multinode-945500: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 18:19:34.590036    6100 start.go:364] duration metric: took 341.6µs to acquireMachinesLock for "multinode-945500"
	I0416 18:19:34.590036    6100 start.go:96] Skipping create...Using existing machine configuration
	I0416 18:19:34.590036    6100 fix.go:54] fixHost starting: 
	I0416 18:19:34.590734    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:19:37.035867    6100 main.go:141] libmachine: [stdout =====>] : Off
	
	I0416 18:19:37.035867    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:37.036414    6100 fix.go:112] recreateIfNeeded on multinode-945500: state=Stopped err=<nil>
	W0416 18:19:37.036443    6100 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 18:19:37.037461    6100 out.go:177] * Restarting existing hyperv VM for "multinode-945500" ...
	I0416 18:19:37.038370    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-945500
	I0416 18:19:39.684634    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:19:39.684634    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:39.684634    6100 main.go:141] libmachine: Waiting for host to start...
	I0416 18:19:39.684634    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:19:41.686593    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:19:41.686680    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:41.686680    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:19:43.975342    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:19:43.975342    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:44.978533    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:19:47.010715    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:19:47.010715    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:47.010812    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:19:49.319391    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:19:49.319391    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:50.321898    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:19:52.351754    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:19:52.351754    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:52.352018    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:19:54.664580    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:19:54.664809    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:55.678807    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:19:57.651910    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:19:57.651910    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:19:57.651910    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:19:59.899268    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:19:59.899268    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:00.906345    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:02.927492    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:02.927492    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:02.928435    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:05.322020    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:05.322020    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:05.324180    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:07.277382    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:07.277382    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:07.277382    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:09.610017    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:09.610066    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:09.610066    6100 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 18:20:09.612167    6100 machine.go:94] provisionDockerMachine start ...
	I0416 18:20:09.612232    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:11.583873    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:11.583873    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:11.583873    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:13.902027    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:13.902027    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:13.905785    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:20:13.906582    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.104 22 <nil> <nil>}
	I0416 18:20:13.906582    6100 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 18:20:14.037561    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 18:20:14.037758    6100 buildroot.go:166] provisioning hostname "multinode-945500"
	I0416 18:20:14.037758    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:15.886948    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:15.888002    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:15.888031    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:18.221981    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:18.221981    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:18.227529    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:20:18.228140    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.104 22 <nil> <nil>}
	I0416 18:20:18.228140    6100 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-945500 && echo "multinode-945500" | sudo tee /etc/hostname
	I0416 18:20:18.392067    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-945500
	
	I0416 18:20:18.392067    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:20.345161    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:20.345161    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:20.345404    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:22.575320    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:22.575320    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:22.579474    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:20:22.579885    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.104 22 <nil> <nil>}
	I0416 18:20:22.579885    6100 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-945500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-945500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-945500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 18:20:22.731877    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 18:20:22.732031    6100 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 18:20:22.732107    6100 buildroot.go:174] setting up certificates
	I0416 18:20:22.732107    6100 provision.go:84] configureAuth start
	I0416 18:20:22.732199    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:24.704769    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:24.705086    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:24.705274    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:27.027891    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:27.027891    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:27.028603    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:28.944328    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:28.944513    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:28.944513    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:31.236918    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:31.237060    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:31.237060    6100 provision.go:143] copyHostCerts
	I0416 18:20:31.237060    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 18:20:31.237060    6100 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 18:20:31.237060    6100 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 18:20:31.237732    6100 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 18:20:31.238878    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 18:20:31.238934    6100 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 18:20:31.238934    6100 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 18:20:31.238934    6100 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 18:20:31.240190    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 18:20:31.240190    6100 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 18:20:31.240190    6100 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 18:20:31.240190    6100 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 18:20:31.240878    6100 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-945500 san=[127.0.0.1 172.19.83.104 localhost minikube multinode-945500]
	I0416 18:20:31.794591    6100 provision.go:177] copyRemoteCerts
	I0416 18:20:31.802576    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 18:20:31.802576    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:33.710432    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:33.711149    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:33.711149    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:36.003295    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:36.003295    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:36.004027    6100 sshutil.go:53] new ssh client: &{IP:172.19.83.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:20:36.114016    6100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3109358s)
	I0416 18:20:36.114106    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 18:20:36.114670    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 18:20:36.154557    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 18:20:36.155367    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 18:20:36.195293    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 18:20:36.195512    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0416 18:20:36.234818    6100 provision.go:87] duration metric: took 13.5019442s to configureAuth
	I0416 18:20:36.234818    6100 buildroot.go:189] setting minikube options for container-runtime
	I0416 18:20:36.235561    6100 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:20:36.235647    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:38.121086    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:38.121086    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:38.121351    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:40.392545    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:40.392545    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:40.397568    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:20:40.398089    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.104 22 <nil> <nil>}
	I0416 18:20:40.398089    6100 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 18:20:40.543936    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 18:20:40.544045    6100 buildroot.go:70] root file system type: tmpfs
	I0416 18:20:40.544176    6100 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 18:20:40.544295    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:42.429691    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:42.429691    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:42.429773    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:44.646262    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:44.646262    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:44.650984    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:20:44.650984    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.104 22 <nil> <nil>}
	I0416 18:20:44.650984    6100 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 18:20:44.816673    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 18:20:44.816673    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:46.731043    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:46.732029    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:46.732101    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:49.051075    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:49.051075    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:49.057279    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:20:49.057888    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.104 22 <nil> <nil>}
	I0416 18:20:49.057888    6100 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 18:20:51.294265    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 18:20:51.294265    6100 machine.go:97] duration metric: took 41.6797306s to provisionDockerMachine
	I0416 18:20:51.294265    6100 start.go:293] postStartSetup for "multinode-945500" (driver="hyperv")
	I0416 18:20:51.294265    6100 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 18:20:51.305946    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 18:20:51.305946    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:53.261389    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:53.262349    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:53.262515    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:55.558863    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:55.559709    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:55.560067    6100 sshutil.go:53] new ssh client: &{IP:172.19.83.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:20:55.677259    6100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3710643s)
	I0416 18:20:55.687682    6100 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 18:20:55.692956    6100 command_runner.go:130] > NAME=Buildroot
	I0416 18:20:55.692956    6100 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 18:20:55.692956    6100 command_runner.go:130] > ID=buildroot
	I0416 18:20:55.692956    6100 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 18:20:55.692956    6100 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 18:20:55.694204    6100 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 18:20:55.694286    6100 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 18:20:55.694798    6100 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 18:20:55.696124    6100 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 18:20:55.696187    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 18:20:55.705933    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 18:20:55.722841    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 18:20:55.766224    6100 start.go:296] duration metric: took 4.4717048s for postStartSetup
	I0416 18:20:55.766327    6100 fix.go:56] duration metric: took 1m21.1716799s for fixHost
	I0416 18:20:55.766327    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:20:57.654600    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:20:57.655594    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:57.655628    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:20:59.909578    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:20:59.909578    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:20:59.913240    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:20:59.913877    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.104 22 <nil> <nil>}
	I0416 18:20:59.913877    6100 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 18:21:00.048276    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713291660.212492767
	
	I0416 18:21:00.048276    6100 fix.go:216] guest clock: 1713291660.212492767
	I0416 18:21:00.048276    6100 fix.go:229] Guest: 2024-04-16 18:21:00.212492767 +0000 UTC Remote: 2024-04-16 18:20:55.7663274 +0000 UTC m=+86.183018801 (delta=4.446165367s)
	I0416 18:21:00.048276    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:21:01.958531    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:21:01.958531    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:01.958531    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:21:04.245872    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:21:04.245872    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:04.249936    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:21:04.250593    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.83.104 22 <nil> <nil>}
	I0416 18:21:04.250688    6100 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713291660
	I0416 18:21:04.396802    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 18:21:00 UTC 2024
	
	I0416 18:21:04.396802    6100 fix.go:236] clock set: Tue Apr 16 18:21:00 UTC 2024
	 (err=<nil>)
	I0416 18:21:04.396802    6100 start.go:83] releasing machines lock for "multinode-945500", held for 1m29.8016651s
	I0416 18:21:04.397497    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:21:06.391026    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:21:06.391713    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:06.391792    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:21:08.724007    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:21:08.724007    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:08.729729    6100 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 18:21:08.729810    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:21:08.745701    6100 ssh_runner.go:195] Run: cat /version.json
	I0416 18:21:08.745701    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:21:10.765717    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:21:10.765717    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:10.766516    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:21:10.766964    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:21:10.767082    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:10.767175    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:21:13.200149    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:21:13.200149    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:13.200479    6100 sshutil.go:53] new ssh client: &{IP:172.19.83.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:21:13.234421    6100 main.go:141] libmachine: [stdout =====>] : 172.19.83.104
	
	I0416 18:21:13.235339    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:13.235821    6100 sshutil.go:53] new ssh client: &{IP:172.19.83.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:21:13.431351    6100 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 18:21:13.431470    6100 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7013922s)
	I0416 18:21:13.431470    6100 command_runner.go:130] > {"iso_version": "v1.33.0-1713236417-18649", "kicbase_version": "v0.0.43-1713215244-18647", "minikube_version": "v1.33.0-beta.0", "commit": "4ec1a3e88a9f3ffb3930e555284d907468ae83a6"}
	I0416 18:21:13.431635    6100 ssh_runner.go:235] Completed: cat /version.json: (4.6855024s)
	I0416 18:21:13.440877    6100 ssh_runner.go:195] Run: systemctl --version
	I0416 18:21:13.449293    6100 command_runner.go:130] > systemd 252 (252)
	I0416 18:21:13.449293    6100 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0416 18:21:13.457907    6100 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 18:21:13.464975    6100 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0416 18:21:13.465092    6100 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 18:21:13.475362    6100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 18:21:13.498990    6100 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0416 18:21:13.499426    6100 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 18:21:13.499426    6100 start.go:494] detecting cgroup driver to use...
	I0416 18:21:13.499426    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:21:13.527926    6100 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0416 18:21:13.539397    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 18:21:13.567918    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 18:21:13.586342    6100 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 18:21:13.593613    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 18:21:13.619605    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:21:13.647518    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 18:21:13.671517    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:21:13.700034    6100 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 18:21:13.729590    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 18:21:13.758931    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 18:21:13.785229    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 18:21:13.819152    6100 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 18:21:13.837863    6100 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 18:21:13.847027    6100 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 18:21:13.871883    6100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:21:14.059448    6100 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 18:21:14.090660    6100 start.go:494] detecting cgroup driver to use...
	I0416 18:21:14.099280    6100 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 18:21:14.124204    6100 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0416 18:21:14.124204    6100 command_runner.go:130] > [Unit]
	I0416 18:21:14.124204    6100 command_runner.go:130] > Description=Docker Application Container Engine
	I0416 18:21:14.124204    6100 command_runner.go:130] > Documentation=https://docs.docker.com
	I0416 18:21:14.124204    6100 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0416 18:21:14.124204    6100 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0416 18:21:14.124204    6100 command_runner.go:130] > StartLimitBurst=3
	I0416 18:21:14.124204    6100 command_runner.go:130] > StartLimitIntervalSec=60
	I0416 18:21:14.124204    6100 command_runner.go:130] > [Service]
	I0416 18:21:14.124204    6100 command_runner.go:130] > Type=notify
	I0416 18:21:14.124204    6100 command_runner.go:130] > Restart=on-failure
	I0416 18:21:14.124204    6100 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0416 18:21:14.124204    6100 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0416 18:21:14.124204    6100 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0416 18:21:14.124204    6100 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0416 18:21:14.124204    6100 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0416 18:21:14.124204    6100 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0416 18:21:14.124204    6100 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0416 18:21:14.124204    6100 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0416 18:21:14.124204    6100 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0416 18:21:14.124204    6100 command_runner.go:130] > ExecStart=
	I0416 18:21:14.124204    6100 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0416 18:21:14.124204    6100 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0416 18:21:14.124204    6100 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0416 18:21:14.124204    6100 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0416 18:21:14.124204    6100 command_runner.go:130] > LimitNOFILE=infinity
	I0416 18:21:14.124204    6100 command_runner.go:130] > LimitNPROC=infinity
	I0416 18:21:14.124204    6100 command_runner.go:130] > LimitCORE=infinity
	I0416 18:21:14.124204    6100 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0416 18:21:14.124204    6100 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0416 18:21:14.124204    6100 command_runner.go:130] > TasksMax=infinity
	I0416 18:21:14.124204    6100 command_runner.go:130] > TimeoutStartSec=0
	I0416 18:21:14.124204    6100 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0416 18:21:14.124204    6100 command_runner.go:130] > Delegate=yes
	I0416 18:21:14.124204    6100 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0416 18:21:14.124204    6100 command_runner.go:130] > KillMode=process
	I0416 18:21:14.124204    6100 command_runner.go:130] > [Install]
	I0416 18:21:14.125341    6100 command_runner.go:130] > WantedBy=multi-user.target
	I0416 18:21:14.137155    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:21:14.169383    6100 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 18:21:14.208199    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:21:14.238621    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:21:14.271453    6100 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 18:21:14.316438    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:21:14.338599    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:21:14.373634    6100 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0416 18:21:14.384311    6100 ssh_runner.go:195] Run: which cri-dockerd
	I0416 18:21:14.390675    6100 command_runner.go:130] > /usr/bin/cri-dockerd
	I0416 18:21:14.402419    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 18:21:14.419197    6100 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 18:21:14.463750    6100 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 18:21:14.654123    6100 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 18:21:14.834262    6100 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 18:21:14.834536    6100 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 18:21:14.872316    6100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:21:15.057607    6100 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 18:21:17.594720    6100 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5369094s)
	I0416 18:21:17.604067    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 18:21:17.639346    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 18:21:17.671723    6100 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 18:21:17.857796    6100 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 18:21:18.049141    6100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:21:18.235522    6100 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 18:21:18.277990    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 18:21:18.311732    6100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:21:18.473958    6100 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 18:21:18.571850    6100 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 18:21:18.584682    6100 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 18:21:18.595121    6100 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0416 18:21:18.595121    6100 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0416 18:21:18.595121    6100 command_runner.go:130] > Device: 0,22	Inode: 847         Links: 1
	I0416 18:21:18.595121    6100 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0416 18:21:18.595121    6100 command_runner.go:130] > Access: 2024-04-16 18:21:18.663583254 +0000
	I0416 18:21:18.595121    6100 command_runner.go:130] > Modify: 2024-04-16 18:21:18.663583254 +0000
	I0416 18:21:18.595121    6100 command_runner.go:130] > Change: 2024-04-16 18:21:18.666583320 +0000
	I0416 18:21:18.595121    6100 command_runner.go:130] >  Birth: -
	I0416 18:21:18.595121    6100 start.go:562] Will wait 60s for crictl version
	I0416 18:21:18.603830    6100 ssh_runner.go:195] Run: which crictl
	I0416 18:21:18.609112    6100 command_runner.go:130] > /usr/bin/crictl
	I0416 18:21:18.617790    6100 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 18:21:18.663486    6100 command_runner.go:130] > Version:  0.1.0
	I0416 18:21:18.663486    6100 command_runner.go:130] > RuntimeName:  docker
	I0416 18:21:18.663900    6100 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0416 18:21:18.663900    6100 command_runner.go:130] > RuntimeApiVersion:  v1
	I0416 18:21:18.667396    6100 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 18:21:18.676387    6100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 18:21:18.703477    6100 command_runner.go:130] > 26.0.1
	I0416 18:21:18.713969    6100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 18:21:18.740947    6100 command_runner.go:130] > 26.0.1
	I0416 18:21:18.742951    6100 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 18:21:18.742951    6100 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0416 18:21:18.748950    6100 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0416 18:21:18.748950    6100 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0416 18:21:18.748950    6100 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0416 18:21:18.748950    6100 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:6f:a4 Flags:up|broadcast|multicast|running}
	I0416 18:21:18.751948    6100 ip.go:210] interface addr: fe80::6b96:eca7:5afa:def5/64
	I0416 18:21:18.751948    6100 ip.go:210] interface addr: 172.19.80.1/20
	I0416 18:21:18.759949    6100 ssh_runner.go:195] Run: grep 172.19.80.1	host.minikube.internal$ /etc/hosts
	I0416 18:21:18.764976    6100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 18:21:18.788835    6100 kubeadm.go:877] updating cluster {Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.83.104 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.85.139 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 18:21:18.789098    6100 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 18:21:18.795942    6100 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 18:21:18.818294    6100 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0416 18:21:18.818364    6100 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0416 18:21:18.818364    6100 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0416 18:21:18.818364    6100 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0416 18:21:18.818364    6100 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0416 18:21:18.818364    6100 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0416 18:21:18.818364    6100 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0416 18:21:18.818364    6100 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0416 18:21:18.818364    6100 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 18:21:18.818364    6100 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0416 18:21:18.818364    6100 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0416 18:21:18.818364    6100 docker.go:615] Images already preloaded, skipping extraction
	I0416 18:21:18.826242    6100 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 18:21:18.847002    6100 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0416 18:21:18.847002    6100 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0416 18:21:18.847002    6100 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0416 18:21:18.847002    6100 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0416 18:21:18.847002    6100 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0416 18:21:18.847002    6100 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0416 18:21:18.847114    6100 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0416 18:21:18.847114    6100 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0416 18:21:18.847114    6100 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 18:21:18.847114    6100 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0416 18:21:18.847362    6100 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0416 18:21:18.847362    6100 cache_images.go:84] Images are preloaded, skipping loading
	I0416 18:21:18.847362    6100 kubeadm.go:928] updating node { 172.19.83.104 8443 v1.29.3 docker true true} ...
	I0416 18:21:18.847362    6100 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-945500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.83.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 18:21:18.854031    6100 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 18:21:18.880193    6100 command_runner.go:130] > cgroupfs
	I0416 18:21:18.881510    6100 cni.go:84] Creating CNI manager for ""
	I0416 18:21:18.881577    6100 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0416 18:21:18.881648    6100 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 18:21:18.881714    6100 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.83.104 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-945500 NodeName:multinode-945500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.83.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.83.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 18:21:18.881972    6100 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.83.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-945500"
	  kubeletExtraArgs:
	    node-ip: 172.19.83.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.83.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 18:21:18.892131    6100 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 18:21:18.910498    6100 command_runner.go:130] > kubeadm
	I0416 18:21:18.910498    6100 command_runner.go:130] > kubectl
	I0416 18:21:18.910498    6100 command_runner.go:130] > kubelet
	I0416 18:21:18.910498    6100 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 18:21:18.921883    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 18:21:18.938104    6100 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0416 18:21:18.971657    6100 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 18:21:18.998326    6100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0416 18:21:19.040348    6100 ssh_runner.go:195] Run: grep 172.19.83.104	control-plane.minikube.internal$ /etc/hosts
	I0416 18:21:19.046898    6100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.83.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
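	The line above shows how minikube rewrites /etc/hosts idempotently: strip any stale entry for the name, then append the current IP-to-name mapping, so repeated runs never duplicate the line. A minimal sketch of the same technique, run against a local temp copy rather than the real /etc/hosts (HOSTS, IP, NAME below are stand-ins):

```shell
# Stand-in hosts file with a stale control-plane entry (old IP).
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.19.91.227\tcontrol-plane.minikube.internal\n' > "$HOSTS"
IP=172.19.83.104
NAME=control-plane.minikube.internal
# Drop any existing line ending in NAME, then append the fresh mapping.
{ grep -v "${NAME}\$" "$HOSTS"; printf '%s\t%s\n' "$IP" "$NAME"; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
# Exactly one entry for NAME remains, pointing at the new IP.
grep "$NAME" "$HOSTS"
```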
	I0416 18:21:19.074680    6100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:21:19.246173    6100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 18:21:19.272609    6100 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500 for IP: 172.19.83.104
	I0416 18:21:19.272723    6100 certs.go:194] generating shared ca certs ...
	I0416 18:21:19.272801    6100 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:21:19.273027    6100 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0416 18:21:19.273630    6100 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0416 18:21:19.273630    6100 certs.go:256] generating profile certs ...
	I0416 18:21:19.275057    6100 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\client.key
	I0416 18:21:19.275287    6100 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.ee50f9d4
	I0416 18:21:19.275512    6100 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.ee50f9d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.83.104]
	I0416 18:21:19.618188    6100 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.ee50f9d4 ...
	I0416 18:21:19.618188    6100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.ee50f9d4: {Name:mk1f72169f6e81bcfcbe83fa03b26f15975d58c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:21:19.619201    6100 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.ee50f9d4 ...
	I0416 18:21:19.620217    6100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.ee50f9d4: {Name:mk7bbb58856f4723240bed121ab9ecb0a828f1b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:21:19.621324    6100 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt.ee50f9d4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt
	I0416 18:21:19.631251    6100 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key.ee50f9d4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key
	I0416 18:21:19.632245    6100 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key
	I0416 18:21:19.632245    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 18:21:19.633238    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0416 18:21:19.633238    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 18:21:19.633238    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 18:21:19.633238    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 18:21:19.633238    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 18:21:19.633238    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 18:21:19.633238    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 18:21:19.633238    6100 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem (1338 bytes)
	W0416 18:21:19.634823    6100 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324_empty.pem, impossibly tiny 0 bytes
	I0416 18:21:19.634823    6100 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0416 18:21:19.634823    6100 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0416 18:21:19.635441    6100 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0416 18:21:19.635601    6100 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0416 18:21:19.635601    6100 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem (1708 bytes)
	I0416 18:21:19.636190    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem -> /usr/share/ca-certificates/5324.pem
	I0416 18:21:19.636190    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /usr/share/ca-certificates/53242.pem
	I0416 18:21:19.636190    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:21:19.637352    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 18:21:19.679344    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 18:21:19.723365    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 18:21:19.767143    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 18:21:19.809881    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 18:21:19.850881    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 18:21:19.893280    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 18:21:19.936699    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 18:21:19.980979    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5324.pem --> /usr/share/ca-certificates/5324.pem (1338 bytes)
	I0416 18:21:20.022061    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /usr/share/ca-certificates/53242.pem (1708 bytes)
	I0416 18:21:20.061219    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 18:21:20.098345    6100 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 18:21:20.137354    6100 ssh_runner.go:195] Run: openssl version
	I0416 18:21:20.145510    6100 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0416 18:21:20.157784    6100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5324.pem && ln -fs /usr/share/ca-certificates/5324.pem /etc/ssl/certs/5324.pem"
	I0416 18:21:20.182016    6100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5324.pem
	I0416 18:21:20.189100    6100 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 18:21:20.189100    6100 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:35 /usr/share/ca-certificates/5324.pem
	I0416 18:21:20.198972    6100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5324.pem
	I0416 18:21:20.206503    6100 command_runner.go:130] > 51391683
	I0416 18:21:20.214143    6100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5324.pem /etc/ssl/certs/51391683.0"
	I0416 18:21:20.240946    6100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/53242.pem && ln -fs /usr/share/ca-certificates/53242.pem /etc/ssl/certs/53242.pem"
	I0416 18:21:20.279061    6100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/53242.pem
	I0416 18:21:20.286584    6100 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 18:21:20.286584    6100 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:35 /usr/share/ca-certificates/53242.pem
	I0416 18:21:20.294975    6100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/53242.pem
	I0416 18:21:20.303724    6100 command_runner.go:130] > 3ec20f2e
	I0416 18:21:20.313469    6100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/53242.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 18:21:20.341996    6100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 18:21:20.367682    6100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:21:20.374174    6100 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:21:20.374174    6100 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:21:20.385312    6100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:21:20.393735    6100 command_runner.go:130] > b5213941
	I0416 18:21:20.401441    6100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
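	The hash-then-symlink sequence above follows the OpenSSL trust-store convention: certificates in /etc/ssl/certs are looked up by subject-name hash, so a symlink named `<hash>.0` must point at the PEM file. A minimal sketch using a throwaway self-signed cert in a temp dir (stand-ins for minikubeCA.pem and /etc/ssl/certs):

```shell
CERTDIR=$(mktemp -d)
# Generate a throwaway self-signed cert to hash (stand-in for minikubeCA.pem).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$CERTDIR/demo.key" -out "$CERTDIR/demo.pem" -days 1 2>/dev/null
# Subject-name hash, as in the `openssl x509 -hash -noout` call above.
HASH=$(openssl x509 -hash -noout -in "$CERTDIR/demo.pem")
# Link <hash>.0 -> cert, mirroring the `ln -fs` the log performs.
ln -fs "$CERTDIR/demo.pem" "$CERTDIR/$HASH.0"
ls -l "$CERTDIR/$HASH.0"
```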
	I0416 18:21:20.430085    6100 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 18:21:20.436874    6100 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 18:21:20.437565    6100 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0416 18:21:20.437565    6100 command_runner.go:130] > Device: 8,1	Inode: 9431342     Links: 1
	I0416 18:21:20.437565    6100 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0416 18:21:20.437627    6100 command_runner.go:130] > Access: 2024-04-16 17:57:16.870126444 +0000
	I0416 18:21:20.437674    6100 command_runner.go:130] > Modify: 2024-04-16 17:57:16.870126444 +0000
	I0416 18:21:20.437725    6100 command_runner.go:130] > Change: 2024-04-16 17:57:16.870126444 +0000
	I0416 18:21:20.437725    6100 command_runner.go:130] >  Birth: 2024-04-16 17:57:16.870126444 +0000
	I0416 18:21:20.446281    6100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 18:21:20.454044    6100 command_runner.go:130] > Certificate will not expire
	I0416 18:21:20.464317    6100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 18:21:20.473358    6100 command_runner.go:130] > Certificate will not expire
	I0416 18:21:20.481843    6100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 18:21:20.491216    6100 command_runner.go:130] > Certificate will not expire
	I0416 18:21:20.499222    6100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 18:21:20.507808    6100 command_runner.go:130] > Certificate will not expire
	I0416 18:21:20.516170    6100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 18:21:20.525030    6100 command_runner.go:130] > Certificate will not expire
	I0416 18:21:20.534005    6100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 18:21:20.545195    6100 command_runner.go:130] > Certificate will not expire
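	The repeated "Certificate will not expire" lines above come from `openssl x509 -checkend 86400`, which exits 0 if the cert is still valid one day (86400 s) from now. A minimal sketch with a throwaway self-signed cert (a stand-in for the cluster certs under /var/lib/minikube/certs):

```shell
DIR=$(mktemp -d)
# Two-day cert, so the 24h -checkend window passes.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$DIR/k.pem" -out "$DIR/c.pem" -days 2 2>/dev/null
# Exit status encodes the answer; the message mirrors openssl's own output.
if openssl x509 -noout -in "$DIR/c.pem" -checkend 86400; then
  echo "Certificate will not expire"
else
  echo "Certificate will expire"
fi
```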
	I0416 18:21:20.545195    6100 kubeadm.go:391] StartCluster: {Name:multinode-945500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.2
9.3 ClusterName:multinode-945500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.83.104 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.91.6 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.85.139 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:
false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 18:21:20.552385    6100 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 18:21:20.584656    6100 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 18:21:20.601391    6100 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0416 18:21:20.602241    6100 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0416 18:21:20.602241    6100 command_runner.go:130] > /var/lib/minikube/etcd:
	I0416 18:21:20.602241    6100 command_runner.go:130] > member
	W0416 18:21:20.602241    6100 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 18:21:20.602406    6100 kubeadm.go:407] found existing configuration files, will attempt cluster restart
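	The restart-vs-fresh-init decision above hinges on a single `sudo ls` probe: if the kubelet config files and the etcd data dir all exist, minikube attempts a cluster restart instead of a fresh `kubeadm init`. A minimal sketch of that probe against a local stand-in tree (ROOT below is hypothetical, not the real filesystem root):

```shell
ROOT=$(mktemp -d)
# Recreate the three paths the probe lists.
mkdir -p "$ROOT/var/lib/kubelet" "$ROOT/var/lib/minikube/etcd/member"
touch "$ROOT/var/lib/kubelet/config.yaml" "$ROOT/var/lib/kubelet/kubeadm-flags.env"
# One ls over all paths: exit 0 means every file is present.
if ls "$ROOT/var/lib/kubelet/kubeadm-flags.env" "$ROOT/var/lib/kubelet/config.yaml" \
      "$ROOT/var/lib/minikube/etcd" >/dev/null 2>&1; then
  echo "found existing configuration files, will attempt cluster restart"
fi
```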
	I0416 18:21:20.602406    6100 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 18:21:20.612829    6100 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 18:21:20.630324    6100 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 18:21:20.631336    6100 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-945500" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 18:21:20.632502    6100 kubeconfig.go:62] C:\Users\jenkins.minikube5\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-945500" cluster setting kubeconfig missing "multinode-945500" context setting]
	I0416 18:21:20.633185    6100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:21:20.648952    6100 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 18:21:20.649835    6100 kapi.go:59] client config for multinode-945500: &rest.Config{Host:"https://172.19.83.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-945500/client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData
:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef16c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 18:21:20.651583    6100 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 18:21:20.663021    6100 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 18:21:20.681082    6100 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0416 18:21:20.681725    6100 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0416 18:21:20.681725    6100 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0416 18:21:20.681725    6100 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0416 18:21:20.681725    6100 command_runner.go:130] >  kind: InitConfiguration
	I0416 18:21:20.681725    6100 command_runner.go:130] >  localAPIEndpoint:
	I0416 18:21:20.681725    6100 command_runner.go:130] > -  advertiseAddress: 172.19.91.227
	I0416 18:21:20.681725    6100 command_runner.go:130] > +  advertiseAddress: 172.19.83.104
	I0416 18:21:20.681725    6100 command_runner.go:130] >    bindPort: 8443
	I0416 18:21:20.681725    6100 command_runner.go:130] >  bootstrapTokens:
	I0416 18:21:20.681725    6100 command_runner.go:130] >    - groups:
	I0416 18:21:20.681725    6100 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0416 18:21:20.681725    6100 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0416 18:21:20.681725    6100 command_runner.go:130] >    name: "multinode-945500"
	I0416 18:21:20.681725    6100 command_runner.go:130] >    kubeletExtraArgs:
	I0416 18:21:20.681725    6100 command_runner.go:130] > -    node-ip: 172.19.91.227
	I0416 18:21:20.681725    6100 command_runner.go:130] > +    node-ip: 172.19.83.104
	I0416 18:21:20.681725    6100 command_runner.go:130] >    taints: []
	I0416 18:21:20.681725    6100 command_runner.go:130] >  ---
	I0416 18:21:20.681725    6100 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0416 18:21:20.681725    6100 command_runner.go:130] >  kind: ClusterConfiguration
	I0416 18:21:20.681725    6100 command_runner.go:130] >  apiServer:
	I0416 18:21:20.681725    6100 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.19.91.227"]
	I0416 18:21:20.681725    6100 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.19.83.104"]
	I0416 18:21:20.681725    6100 command_runner.go:130] >    extraArgs:
	I0416 18:21:20.681725    6100 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0416 18:21:20.681725    6100 command_runner.go:130] >  controllerManager:
	I0416 18:21:20.681725    6100 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.19.91.227
	+  advertiseAddress: 172.19.83.104
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-945500"
	   kubeletExtraArgs:
	-    node-ip: 172.19.91.227
	+    node-ip: 172.19.83.104
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.19.91.227"]
	+  certSANs: ["127.0.0.1", "localhost", "172.19.83.104"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
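	The drift check shown above relies on `diff -u` exit codes: the staged /var/tmp/minikube/kubeadm.yaml.new is compared to the active kubeadm.yaml, and a non-zero exit is the signal to reconfigure the cluster from the new file. A minimal sketch with local stand-in files reproducing the advertiseAddress change from the log:

```shell
DIR=$(mktemp -d)
printf 'advertiseAddress: 172.19.91.227\n' > "$DIR/kubeadm.yaml"
printf 'advertiseAddress: 172.19.83.104\n' > "$DIR/kubeadm.yaml.new"
# diff exits 1 on differences; capture the unified diff for the log.
if ! diff -u "$DIR/kubeadm.yaml" "$DIR/kubeadm.yaml.new" > "$DIR/kubeadm.diff"; then
  echo "drift detected, will reconfigure"
fi
cat "$DIR/kubeadm.diff"
```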
	I0416 18:21:20.681725    6100 kubeadm.go:1154] stopping kube-system containers ...
	I0416 18:21:20.688408    6100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 18:21:20.711124    6100 command_runner.go:130] > 6ad0b1d75a1e
	I0416 18:21:20.712006    6100 command_runner.go:130] > 2b470472d009
	I0416 18:21:20.712006    6100 command_runner.go:130] > 6f233a9704ee
	I0416 18:21:20.712006    6100 command_runner.go:130] > 2ba60ece6840
	I0416 18:21:20.712006    6100 command_runner.go:130] > cd37920f1d54
	I0416 18:21:20.712006    6100 command_runner.go:130] > f56880607ce1
	I0416 18:21:20.712133    6100 command_runner.go:130] > d2cd68d7f406
	I0416 18:21:20.712133    6100 command_runner.go:130] > 68766d2b671f
	I0416 18:21:20.712133    6100 command_runner.go:130] > 736259e5d03b
	I0416 18:21:20.712133    6100 command_runner.go:130] > 4a7c8d9808b6
	I0416 18:21:20.712133    6100 command_runner.go:130] > 91288754cb0b
	I0416 18:21:20.712133    6100 command_runner.go:130] > 0cae708a3787
	I0416 18:21:20.712133    6100 command_runner.go:130] > 5f7e5b16341d
	I0416 18:21:20.712133    6100 command_runner.go:130] > ecb0ceb1a3fe
	I0416 18:21:20.712133    6100 command_runner.go:130] > b8699d93388d
	I0416 18:21:20.712243    6100 command_runner.go:130] > d28c611e0605
	I0416 18:21:20.712243    6100 docker.go:483] Stopping containers: [6ad0b1d75a1e 2b470472d009 6f233a9704ee 2ba60ece6840 cd37920f1d54 f56880607ce1 d2cd68d7f406 68766d2b671f 736259e5d03b 4a7c8d9808b6 91288754cb0b 0cae708a3787 5f7e5b16341d ecb0ceb1a3fe b8699d93388d d28c611e0605]
	I0416 18:21:20.719536    6100 ssh_runner.go:195] Run: docker stop 6ad0b1d75a1e 2b470472d009 6f233a9704ee 2ba60ece6840 cd37920f1d54 f56880607ce1 d2cd68d7f406 68766d2b671f 736259e5d03b 4a7c8d9808b6 91288754cb0b 0cae708a3787 5f7e5b16341d ecb0ceb1a3fe b8699d93388d d28c611e0605
	I0416 18:21:20.745732    6100 command_runner.go:130] > 6ad0b1d75a1e
	I0416 18:21:20.745732    6100 command_runner.go:130] > 2b470472d009
	I0416 18:21:20.745732    6100 command_runner.go:130] > 6f233a9704ee
	I0416 18:21:20.745732    6100 command_runner.go:130] > 2ba60ece6840
	I0416 18:21:20.745732    6100 command_runner.go:130] > cd37920f1d54
	I0416 18:21:20.745732    6100 command_runner.go:130] > f56880607ce1
	I0416 18:21:20.745732    6100 command_runner.go:130] > d2cd68d7f406
	I0416 18:21:20.745732    6100 command_runner.go:130] > 68766d2b671f
	I0416 18:21:20.745732    6100 command_runner.go:130] > 736259e5d03b
	I0416 18:21:20.745732    6100 command_runner.go:130] > 4a7c8d9808b6
	I0416 18:21:20.745732    6100 command_runner.go:130] > 91288754cb0b
	I0416 18:21:20.745732    6100 command_runner.go:130] > 0cae708a3787
	I0416 18:21:20.745732    6100 command_runner.go:130] > 5f7e5b16341d
	I0416 18:21:20.745732    6100 command_runner.go:130] > ecb0ceb1a3fe
	I0416 18:21:20.745732    6100 command_runner.go:130] > b8699d93388d
	I0416 18:21:20.745732    6100 command_runner.go:130] > d28c611e0605
	I0416 18:21:20.757003    6100 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 18:21:20.790178    6100 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 18:21:20.806208    6100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0416 18:21:20.806208    6100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0416 18:21:20.806208    6100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0416 18:21:20.806665    6100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 18:21:20.806791    6100 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 18:21:20.806791    6100 kubeadm.go:156] found existing configuration files:
	
	I0416 18:21:20.817877    6100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 18:21:20.833161    6100 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 18:21:20.833837    6100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 18:21:20.841401    6100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 18:21:20.869610    6100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 18:21:20.884918    6100 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 18:21:20.885095    6100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 18:21:20.893856    6100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 18:21:20.922902    6100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 18:21:20.937552    6100 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 18:21:20.937913    6100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 18:21:20.946598    6100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 18:21:20.972072    6100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 18:21:20.988305    6100 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 18:21:20.988368    6100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 18:21:20.996217    6100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 18:21:21.022601    6100 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 18:21:21.040025    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 18:21:21.270121    6100 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 18:21:21.270121    6100 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0416 18:21:21.270121    6100 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0416 18:21:21.270121    6100 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 18:21:21.270121    6100 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0416 18:21:21.270121    6100 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0416 18:21:21.270121    6100 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0416 18:21:21.270281    6100 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0416 18:21:21.270281    6100 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0416 18:21:21.270281    6100 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 18:21:21.270281    6100 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 18:21:21.270343    6100 command_runner.go:130] > [certs] Using the existing "sa" key
	I0416 18:21:21.270392    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 18:21:22.493311    6100 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 18:21:22.493311    6100 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 18:21:22.493311    6100 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 18:21:22.493311    6100 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 18:21:22.493311    6100 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 18:21:22.493311    6100 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 18:21:22.493311    6100 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2228504s)
	I0416 18:21:22.493311    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 18:21:22.769800    6100 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 18:21:22.770187    6100 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 18:21:22.770187    6100 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0416 18:21:22.770247    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 18:21:22.865592    6100 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 18:21:22.865730    6100 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 18:21:22.865730    6100 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 18:21:22.865730    6100 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 18:21:22.865807    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 18:21:22.967260    6100 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 18:21:22.967260    6100 api_server.go:52] waiting for apiserver process to appear ...
	I0416 18:21:22.980139    6100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 18:21:23.496671    6100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 18:21:23.983300    6100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 18:21:24.482657    6100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 18:21:24.513872    6100 command_runner.go:130] > 1832
	I0416 18:21:24.513872    6100 api_server.go:72] duration metric: took 1.5465242s to wait for apiserver process to appear ...
	I0416 18:21:24.513872    6100 api_server.go:88] waiting for apiserver healthz status ...
	I0416 18:21:24.513872    6100 api_server.go:253] Checking apiserver healthz at https://172.19.83.104:8443/healthz ...
	I0416 18:21:27.605327    6100 api_server.go:279] https://172.19.83.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 18:21:27.605562    6100 api_server.go:103] status: https://172.19.83.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 18:21:27.605645    6100 api_server.go:253] Checking apiserver healthz at https://172.19.83.104:8443/healthz ...
	I0416 18:21:27.689316    6100 api_server.go:279] https://172.19.83.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 18:21:27.689945    6100 api_server.go:103] status: https://172.19.83.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 18:21:28.027157    6100 api_server.go:253] Checking apiserver healthz at https://172.19.83.104:8443/healthz ...
	I0416 18:21:28.035853    6100 api_server.go:279] https://172.19.83.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 18:21:28.035853    6100 api_server.go:103] status: https://172.19.83.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 18:21:28.525991    6100 api_server.go:253] Checking apiserver healthz at https://172.19.83.104:8443/healthz ...
	I0416 18:21:28.535157    6100 api_server.go:279] https://172.19.83.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 18:21:28.535447    6100 api_server.go:103] status: https://172.19.83.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 18:21:29.028437    6100 api_server.go:253] Checking apiserver healthz at https://172.19.83.104:8443/healthz ...
	I0416 18:21:29.041488    6100 api_server.go:279] https://172.19.83.104:8443/healthz returned 200:
	ok
	I0416 18:21:29.042204    6100 round_trippers.go:463] GET https://172.19.83.104:8443/version
	I0416 18:21:29.042204    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:29.042204    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:29.042204    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:29.051782    6100 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0416 18:21:29.051782    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:29.051782    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:29.051782    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:29.051782    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:29.051782    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:29.051782    6100 round_trippers.go:580]     Content-Length: 263
	I0416 18:21:29.051782    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:29 GMT
	I0416 18:21:29.051782    6100 round_trippers.go:580]     Audit-Id: 309c1c07-9def-49d0-a541-d12180c9534f
	I0416 18:21:29.051782    6100 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0416 18:21:29.051782    6100 api_server.go:141] control plane version: v1.29.3
	I0416 18:21:29.051782    6100 api_server.go:131] duration metric: took 4.5376523s to wait for apiserver health ...
	I0416 18:21:29.051782    6100 cni.go:84] Creating CNI manager for ""
	I0416 18:21:29.051782    6100 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0416 18:21:29.052809    6100 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 18:21:29.061782    6100 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 18:21:29.069810    6100 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0416 18:21:29.070339    6100 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0416 18:21:29.070339    6100 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0416 18:21:29.070339    6100 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0416 18:21:29.070433    6100 command_runner.go:130] > Access: 2024-04-16 18:20:04.000071600 +0000
	I0416 18:21:29.070433    6100 command_runner.go:130] > Modify: 2024-04-16 08:43:32.000000000 +0000
	I0416 18:21:29.070433    6100 command_runner.go:130] > Change: 2024-04-16 18:19:54.261000000 +0000
	I0416 18:21:29.070433    6100 command_runner.go:130] >  Birth: -
	I0416 18:21:29.070433    6100 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 18:21:29.070433    6100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 18:21:29.115294    6100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 18:21:29.947912    6100 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0416 18:21:29.947912    6100 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0416 18:21:29.947972    6100 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0416 18:21:29.947972    6100 command_runner.go:130] > daemonset.apps/kindnet configured
	I0416 18:21:29.948026    6100 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 18:21:29.948162    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods
	I0416 18:21:29.948233    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:29.948233    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:29.948233    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:29.953281    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:29.953353    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:29.953353    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:29.953353    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:29.953353    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:29.953353    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:29.953353    6100 round_trippers.go:580]     Audit-Id: b5739e7d-5af9-4993-82e3-9fd5366cc000
	I0416 18:21:29.953353    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:29.954799    6100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1408"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 73060 chars]
	I0416 18:21:29.959976    6100 system_pods.go:59] 10 kube-system pods found
	I0416 18:21:29.960036    6100 system_pods.go:61] "coredns-76f75df574-86z7h" [1ca004a0-0575-4576-a5ed-ba0891a7d277] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 18:21:29.960036    6100 system_pods.go:61] "etcd-multinode-945500" [7c7a0e73-a281-4231-95c7-479afeb4945c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 18:21:29.960097    6100 system_pods.go:61] "kindnet-7pg6g" [b4887fd4-c2ff-40a2-ab8f-89e227151faa] Running
	I0416 18:21:29.960097    6100 system_pods.go:61] "kindnet-tp7jl" [91595b62-10ee-47cb-a0c9-2ca83ad70af7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0416 18:21:29.960097    6100 system_pods.go:61] "kube-apiserver-multinode-945500" [249203ba-a5d5-4e35-af8e-172d64c91440] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 18:21:29.960097    6100 system_pods.go:61] "kube-controller-manager-multinode-945500" [01b937c2-9827-4240-83f0-3536fec5eb5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 18:21:29.960097    6100 system_pods.go:61] "kube-proxy-q5bdr" [18f90e3f-dd52-44a3-918a-66181a779f58] Running
	I0416 18:21:29.960097    6100 system_pods.go:61] "kube-proxy-rfxsg" [b740e6e0-4768-4dd4-a958-307662a92578] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0416 18:21:29.960097    6100 system_pods.go:61] "kube-scheduler-multinode-945500" [a09e52e8-1ac2-4c22-8a3d-57969fae85a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 18:21:29.960097    6100 system_pods.go:61] "storage-provisioner" [3bd5cc95-eef6-473e-b6f9-898568046f1b] Running
	I0416 18:21:29.960186    6100 system_pods.go:74] duration metric: took 12.0698ms to wait for pod list to return data ...
	I0416 18:21:29.960186    6100 node_conditions.go:102] verifying NodePressure condition ...
	I0416 18:21:29.960235    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes
	I0416 18:21:29.960321    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:29.960321    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:29.960321    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:29.966003    6100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 18:21:29.966003    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:29.966003    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:29.967019    6100 round_trippers.go:580]     Audit-Id: 34b4f8b3-d2b1-43d6-92c3-479b07bd154b
	I0416 18:21:29.967019    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:29.967019    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:29.967019    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:29.967076    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:29.967181    6100 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1408"},"items":[{"metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 10249 chars]
	I0416 18:21:29.968131    6100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:21:29.968131    6100 node_conditions.go:123] node cpu capacity is 2
	I0416 18:21:29.968131    6100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:21:29.968131    6100 node_conditions.go:123] node cpu capacity is 2
	I0416 18:21:29.968131    6100 node_conditions.go:105] duration metric: took 7.9449ms to run NodePressure ...
	I0416 18:21:29.968131    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 18:21:30.275784    6100 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0416 18:21:30.275997    6100 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0416 18:21:30.276065    6100 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 18:21:30.276400    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0416 18:21:30.276400    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.276400    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.276400    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.281498    6100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 18:21:30.281498    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.281498    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.281498    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.281498    6100 round_trippers.go:580]     Audit-Id: b84d62b7-6c4b-49a5-84aa-1b7b861f0277
	I0416 18:21:30.281498    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.281498    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.281498    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.282506    6100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1412"},"items":[{"metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"7c7a0e73-a281-4231-95c7-479afeb4945c","resourceVersion":"1397","creationTimestamp":"2024-04-16T18:21:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.83.104:2379","kubernetes.io/config.hash":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.mirror":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.seen":"2024-04-16T18:21:23.147214167Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 30532 chars]
	I0416 18:21:30.283505    6100 kubeadm.go:733] kubelet initialised
	I0416 18:21:30.283505    6100 kubeadm.go:734] duration metric: took 7.4391ms waiting for restarted kubelet to initialise ...
	I0416 18:21:30.283505    6100 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:21:30.283505    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods
	I0416 18:21:30.283505    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.283505    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.283505    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.287514    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:30.287514    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.287514    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.287514    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.287514    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.287514    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.287514    6100 round_trippers.go:580]     Audit-Id: 58bb7d51-526d-438e-a8db-45efc3438395
	I0416 18:21:30.287514    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.288510    6100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1412"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 72676 chars]
	I0416 18:21:30.291499    6100 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:30.291499    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:30.291499    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.291499    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.291499    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.294512    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:30.294512    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.294512    6100 round_trippers.go:580]     Audit-Id: 3ea8cc92-4cb4-4311-acd1-8e9fbef70dd4
	I0416 18:21:30.295281    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.295370    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.295370    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.295370    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.295422    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.295602    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0416 18:21:30.296420    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:30.296420    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.296420    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.296420    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.299460    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:30.299460    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.299460    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.299460    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.299460    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.299460    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.299460    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.299460    6100 round_trippers.go:580]     Audit-Id: 87a2f897-d1fa-4256-91fd-5a9c081676ee
	I0416 18:21:30.299460    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:30.300108    6100 pod_ready.go:97] node "multinode-945500" hosting pod "coredns-76f75df574-86z7h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:30.300108    6100 pod_ready.go:81] duration metric: took 8.6092ms for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	E0416 18:21:30.300193    6100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-945500" hosting pod "coredns-76f75df574-86z7h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:30.300193    6100 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:30.300312    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:21:30.300312    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.300312    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.300312    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.303098    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:30.303098    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.303098    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.303098    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.303098    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.303098    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.303098    6100 round_trippers.go:580]     Audit-Id: 051209a3-c3e2-4a59-af16-9942e174a927
	I0416 18:21:30.303098    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.303098    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"7c7a0e73-a281-4231-95c7-479afeb4945c","resourceVersion":"1397","creationTimestamp":"2024-04-16T18:21:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.83.104:2379","kubernetes.io/config.hash":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.mirror":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.seen":"2024-04-16T18:21:23.147214167Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0416 18:21:30.303780    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:30.303814    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.303814    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.303814    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.307071    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:30.307071    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.307071    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.307133    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.307133    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.307133    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.307133    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.307133    6100 round_trippers.go:580]     Audit-Id: 4a8e0bcf-a76d-466a-bbf6-903f4b7d36db
	I0416 18:21:30.307133    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:30.307133    6100 pod_ready.go:97] node "multinode-945500" hosting pod "etcd-multinode-945500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:30.307667    6100 pod_ready.go:81] duration metric: took 7.4739ms for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	E0416 18:21:30.307667    6100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-945500" hosting pod "etcd-multinode-945500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:30.307667    6100 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:30.307667    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-945500
	I0416 18:21:30.307783    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.307783    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.307783    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.310951    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:30.311251    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.311251    6100 round_trippers.go:580]     Audit-Id: b9d06344-e19d-4859-b5a5-ee75d232210d
	I0416 18:21:30.311251    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.311251    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.311251    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.311251    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.311251    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.311458    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-945500","namespace":"kube-system","uid":"249203ba-a5d5-4e35-af8e-172d64c91440","resourceVersion":"1408","creationTimestamp":"2024-04-16T18:21:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.83.104:8443","kubernetes.io/config.hash":"2693abda4b2acecd43625f54801b2092","kubernetes.io/config.mirror":"2693abda4b2acecd43625f54801b2092","kubernetes.io/config.seen":"2024-04-16T18:21:23.093778187Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7939 chars]
	I0416 18:21:30.311518    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:30.311518    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.311518    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.311518    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.314094    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:30.314094    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.314094    6100 round_trippers.go:580]     Audit-Id: ce6d498e-7b38-4548-9298-f20f3a1424de
	I0416 18:21:30.314094    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.314094    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.314094    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.314094    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.314094    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.314729    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:30.315150    6100 pod_ready.go:97] node "multinode-945500" hosting pod "kube-apiserver-multinode-945500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:30.315150    6100 pod_ready.go:81] duration metric: took 7.4826ms for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	E0416 18:21:30.315207    6100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-945500" hosting pod "kube-apiserver-multinode-945500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:30.315207    6100 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:30.315292    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-945500
	I0416 18:21:30.315292    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.315292    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.315292    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.317072    6100 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0416 18:21:30.317845    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.317845    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.317845    6100 round_trippers.go:580]     Audit-Id: ed90919a-665f-40e1-8702-99be45c6731a
	I0416 18:21:30.317845    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.317845    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.317845    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.317845    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.318438    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-945500","namespace":"kube-system","uid":"01b937c2-9827-4240-83f0-3536fec5eb5e","resourceVersion":"1392","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.mirror":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.seen":"2024-04-16T17:57:28.101473146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7732 chars]
	I0416 18:21:30.357180    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:30.357275    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.357275    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.357275    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.360511    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:30.360511    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.360511    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.360511    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.360511    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.360511    6100 round_trippers.go:580]     Audit-Id: 9fdf9aad-1db4-40df-a245-6dbce6256e46
	I0416 18:21:30.360511    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.360511    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.360511    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:30.361326    6100 pod_ready.go:97] node "multinode-945500" hosting pod "kube-controller-manager-multinode-945500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:30.361326    6100 pod_ready.go:81] duration metric: took 46.1161ms for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	E0416 18:21:30.361326    6100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-945500" hosting pod "kube-controller-manager-multinode-945500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:30.361326    6100 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q5bdr" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:30.559470    6100 request.go:629] Waited for 197.8253ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q5bdr
	I0416 18:21:30.559470    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q5bdr
	I0416 18:21:30.559470    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.559470    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.559470    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.563135    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:30.563135    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.563135    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.563135    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.563135    6100 round_trippers.go:580]     Audit-Id: aaaffc0a-e82e-40be-a3c7-bd42cc959370
	I0416 18:21:30.564059    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.564059    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.564059    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.564443    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q5bdr","generateName":"kube-proxy-","namespace":"kube-system","uid":"18f90e3f-dd52-44a3-918a-66181a779f58","resourceVersion":"614","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5826 chars]
	I0416 18:21:30.762186    6100 request.go:629] Waited for 196.9037ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:21:30.762330    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:21:30.762466    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.762513    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.762513    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.766521    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:30.766521    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.766521    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.766521    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.766521    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.766521    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:30 GMT
	I0416 18:21:30.766521    6100 round_trippers.go:580]     Audit-Id: 7d08be20-9010-4ab0-a635-c801f24f84ba
	I0416 18:21:30.766521    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.767218    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"1253","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-mana [truncated 3831 chars]
	I0416 18:21:30.767931    6100 pod_ready.go:92] pod "kube-proxy-q5bdr" in "kube-system" namespace has status "Ready":"True"
	I0416 18:21:30.767931    6100 pod_ready.go:81] duration metric: took 406.5819ms for pod "kube-proxy-q5bdr" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:30.767931    6100 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:30.949186    6100 request.go:629] Waited for 180.9542ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 18:21:30.949443    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 18:21:30.949443    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:30.949443    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:30.949443    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:30.953594    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:30.953594    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:30.953594    6100 round_trippers.go:580]     Audit-Id: 646374f8-6dd9-4368-9d91-3734bd9f2169
	I0416 18:21:30.953594    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:30.953594    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:30.953594    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:30.953594    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:30.953594    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:31 GMT
	I0416 18:21:30.954123    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rfxsg","generateName":"kube-proxy-","namespace":"kube-system","uid":"b740e6e0-4768-4dd4-a958-307662a92578","resourceVersion":"1410","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0416 18:21:31.151002    6100 request.go:629] Waited for 195.9871ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:31.151002    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:31.151002    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:31.151002    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:31.151002    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:31.156190    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:31.156287    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:31.156287    6100 round_trippers.go:580]     Audit-Id: 5060cfe5-4e82-4022-8cb6-c66802f44a56
	I0416 18:21:31.156287    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:31.156287    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:31.156287    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:31.156287    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:31.156402    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:31 GMT
	I0416 18:21:31.156715    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:31.157007    6100 pod_ready.go:97] node "multinode-945500" hosting pod "kube-proxy-rfxsg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:31.157007    6100 pod_ready.go:81] duration metric: took 389.0541ms for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	E0416 18:21:31.157539    6100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-945500" hosting pod "kube-proxy-rfxsg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:31.157539    6100 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:31.352483    6100 request.go:629] Waited for 194.7388ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 18:21:31.352658    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 18:21:31.352658    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:31.352658    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:31.352658    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:31.357523    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:31.357523    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:31.357523    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:31.357523    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:31.357523    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:31.357523    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:31 GMT
	I0416 18:21:31.357523    6100 round_trippers.go:580]     Audit-Id: 4e526a9b-21d6-4eec-9e13-ea9da79bd8c7
	I0416 18:21:31.357523    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:31.357523    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-945500","namespace":"kube-system","uid":"a09e52e8-1ac2-4c22-8a3d-57969fae85a9","resourceVersion":"1391","creationTimestamp":"2024-04-16T17:57:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.mirror":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.seen":"2024-04-16T17:57:20.694761708Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5444 chars]
	I0416 18:21:31.556678    6100 request.go:629] Waited for 197.6783ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:31.556678    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:31.556678    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:31.556678    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:31.556678    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:31.560424    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:31.560424    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:31.560424    6100 round_trippers.go:580]     Audit-Id: 11bc678b-39a9-447c-9dbf-7de32d71873f
	I0416 18:21:31.560424    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:31.561436    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:31.561436    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:31.561483    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:31.561483    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:31 GMT
	I0416 18:21:31.561841    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:31.562511    6100 pod_ready.go:97] node "multinode-945500" hosting pod "kube-scheduler-multinode-945500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:31.562619    6100 pod_ready.go:81] duration metric: took 405.057ms for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	E0416 18:21:31.562619    6100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-945500" hosting pod "kube-scheduler-multinode-945500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-945500" has status "Ready":"False"
	I0416 18:21:31.562619    6100 pod_ready.go:38] duration metric: took 1.2790412s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:21:31.562726    6100 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 18:21:31.579389    6100 command_runner.go:130] > -16
	I0416 18:21:31.579389    6100 ops.go:34] apiserver oom_adj: -16
	I0416 18:21:31.579389    6100 kubeadm.go:591] duration metric: took 10.9763098s to restartPrimaryControlPlane
	I0416 18:21:31.579389    6100 kubeadm.go:393] duration metric: took 11.0335672s to StartCluster
	I0416 18:21:31.579389    6100 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:21:31.579389    6100 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 18:21:31.580775    6100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:21:31.582619    6100 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.83.104 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 18:21:31.582619    6100 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 18:21:31.583515    6100 out.go:177] * Enabled addons: 
	I0416 18:21:31.583347    6100 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:21:31.584058    6100 addons.go:505] duration metric: took 1.4389ms for enable addons: enabled=[]
	I0416 18:21:31.583515    6100 out.go:177] * Verifying Kubernetes components...
	I0416 18:21:31.592212    6100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:21:31.860734    6100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 18:21:31.887581    6100 node_ready.go:35] waiting up to 6m0s for node "multinode-945500" to be "Ready" ...
	I0416 18:21:31.887793    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:31.887864    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:31.887864    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:31.887864    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:31.891761    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:31.891761    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:31.891761    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:31.891761    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:31.891761    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:32 GMT
	I0416 18:21:31.891761    6100 round_trippers.go:580]     Audit-Id: 4b28e678-231c-4674-a2ce-17b51603bcc0
	I0416 18:21:31.891761    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:31.891761    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:31.892520    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:32.401801    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:32.402244    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:32.402244    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:32.402244    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:32.409822    6100 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 18:21:32.409822    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:32.409822    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:32.409822    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:32 GMT
	I0416 18:21:32.409822    6100 round_trippers.go:580]     Audit-Id: d8a94859-d8f5-4665-8fa6-87ee41266df6
	I0416 18:21:32.409822    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:32.409822    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:32.410477    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:32.410614    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:32.901850    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:32.901937    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:32.901971    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:32.901971    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:32.908426    6100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 18:21:32.908426    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:32.908426    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:32.908426    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:32.908426    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:32.908426    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:32.908426    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:33 GMT
	I0416 18:21:32.908426    6100 round_trippers.go:580]     Audit-Id: 9913f009-5bcf-466a-8735-95b4955ab714
	I0416 18:21:32.908426    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:33.391283    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:33.391283    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:33.391283    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:33.391283    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:33.396251    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:33.396251    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:33.396251    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:33 GMT
	I0416 18:21:33.396251    6100 round_trippers.go:580]     Audit-Id: 216d57c2-e99f-413c-955c-501019c11f8d
	I0416 18:21:33.396251    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:33.396251    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:33.396251    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:33.396251    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:33.396251    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1387","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5372 chars]
	I0416 18:21:33.889817    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:33.889817    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:33.889817    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:33.889817    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:33.892580    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:33.892580    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:33.892580    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:33.892580    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:33.892580    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:33.892580    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:33.892580    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:34 GMT
	I0416 18:21:33.892580    6100 round_trippers.go:580]     Audit-Id: 6263b3d2-9ac9-46f1-9f25-a94ddbb6119c
	I0416 18:21:33.894021    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:33.894770    6100 node_ready.go:49] node "multinode-945500" has status "Ready":"True"
	I0416 18:21:33.894881    6100 node_ready.go:38] duration metric: took 2.0070897s for node "multinode-945500" to be "Ready" ...
	I0416 18:21:33.894881    6100 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:21:33.895060    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods
	I0416 18:21:33.895060    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:33.895060    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:33.895060    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:33.902848    6100 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 18:21:33.902848    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:33.902848    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:34 GMT
	I0416 18:21:33.902848    6100 round_trippers.go:580]     Audit-Id: dbb6d992-6f70-48f2-82a6-0e5d32bb5622
	I0416 18:21:33.902848    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:33.902848    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:33.902848    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:33.902848    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:33.904656    6100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1478"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 72676 chars]
	I0416 18:21:33.907694    6100 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:33.907840    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:33.907912    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:33.907912    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:33.907912    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:33.911320    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:33.911320    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:33.911320    6100 round_trippers.go:580]     Audit-Id: 3220a1e2-d635-457b-81b1-f8894b38559f
	I0416 18:21:33.911320    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:33.911320    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:33.911320    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:33.911320    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:33.911320    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:34 GMT
	I0416 18:21:33.911320    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0416 18:21:33.912355    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:33.912355    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:33.912355    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:33.912355    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:33.915031    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:33.915031    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:33.915031    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:33.915031    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:34 GMT
	I0416 18:21:33.915031    6100 round_trippers.go:580]     Audit-Id: 6b4144ae-725a-4980-9f4a-15a386954169
	I0416 18:21:33.915031    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:33.915031    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:33.915031    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:33.915450    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:34.417249    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:34.417374    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:34.417374    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:34.417374    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:34.421782    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:34.421782    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:34.421782    6100 round_trippers.go:580]     Audit-Id: d5d7fe68-eefc-43fe-a98f-83890e74a92a
	I0416 18:21:34.421782    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:34.421782    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:34.421782    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:34.421782    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:34.421782    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:34 GMT
	I0416 18:21:34.422806    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0416 18:21:34.423444    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:34.423533    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:34.423533    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:34.423533    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:34.427244    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:34.427244    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:34.427244    6100 round_trippers.go:580]     Audit-Id: 4c02c3e3-84ff-48bd-9aab-90b7093918b8
	I0416 18:21:34.427244    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:34.427244    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:34.427244    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:34.427244    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:34.427244    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:34 GMT
	I0416 18:21:34.427996    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:34.916434    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:34.916434    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:34.916434    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:34.916434    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:34.921034    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:34.921034    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:34.921034    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:35 GMT
	I0416 18:21:34.921034    6100 round_trippers.go:580]     Audit-Id: 0881b3d5-2367-4d48-b656-506a6b78312e
	I0416 18:21:34.921034    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:34.921034    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:34.921034    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:34.921034    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:34.921304    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0416 18:21:34.922030    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:34.922109    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:34.922109    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:34.922109    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:34.924843    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:34.924843    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:34.925328    6100 round_trippers.go:580]     Audit-Id: 1405c11a-6691-44f2-bec1-9cb30820a7e9
	I0416 18:21:34.925328    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:34.925328    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:34.925328    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:34.925328    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:34.925328    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:35 GMT
	I0416 18:21:34.925328    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:35.412036    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:35.412036    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:35.412036    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:35.412036    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:35.415610    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:35.416406    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:35.416406    6100 round_trippers.go:580]     Audit-Id: 22a11f50-f897-467e-ab73-4ac1ffa509dc
	I0416 18:21:35.416406    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:35.416406    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:35.416406    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:35.416543    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:35.416543    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:35 GMT
	I0416 18:21:35.416852    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0416 18:21:35.417816    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:35.417908    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:35.417908    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:35.417908    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:35.421430    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:35.421430    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:35.421430    6100 round_trippers.go:580]     Audit-Id: dbd13b80-a695-4007-abf2-60b9594c24f1
	I0416 18:21:35.421540    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:35.421540    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:35.421540    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:35.421540    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:35.421540    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:35 GMT
	I0416 18:21:35.421943    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:35.910568    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:35.910568    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:35.910568    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:35.910568    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:35.914558    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:35.914558    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:35.915287    6100 round_trippers.go:580]     Audit-Id: 05ea3aca-fb99-4223-9ccd-5097e82f227c
	I0416 18:21:35.915287    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:35.915287    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:35.915287    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:35.915287    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:35.915287    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:36 GMT
	I0416 18:21:35.915506    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0416 18:21:35.916187    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:35.916187    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:35.916187    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:35.916187    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:35.918772    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:35.919570    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:35.919570    6100 round_trippers.go:580]     Audit-Id: 2dd77766-3958-4b5a-ab8a-cb9c38078ffb
	I0416 18:21:35.919570    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:35.919570    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:35.919570    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:35.919570    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:35.919570    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:36 GMT
	I0416 18:21:35.919779    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:35.920080    6100 pod_ready.go:102] pod "coredns-76f75df574-86z7h" in "kube-system" namespace has status "Ready":"False"
	I0416 18:21:36.415559    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:36.415559    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:36.415559    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:36.415688    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:36.418551    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:36.418551    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:36.418551    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:36 GMT
	I0416 18:21:36.418551    6100 round_trippers.go:580]     Audit-Id: 77f9c7e1-e46b-4f0d-9c44-ba7e0aa33021
	I0416 18:21:36.418551    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:36.418551    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:36.418551    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:36.418551    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:36.419555    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0416 18:21:36.419555    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:36.419555    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:36.419555    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:36.419555    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:36.424547    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:36.424547    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:36.424547    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:36.424547    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:36.424939    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:36 GMT
	I0416 18:21:36.424939    6100 round_trippers.go:580]     Audit-Id: ede48082-6680-4c85-b1b3-ab4a733730de
	I0416 18:21:36.424939    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:36.424939    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:36.426763    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:36.914863    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:36.914863    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:36.914863    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:36.914863    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:36.921640    6100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 18:21:36.921640    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:36.921640    6100 round_trippers.go:580]     Audit-Id: 82613e9a-0aa0-4889-8472-cddd7ed3be27
	I0416 18:21:36.921640    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:36.921640    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:36.921640    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:36.921640    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:36.921640    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:37 GMT
	I0416 18:21:36.922498    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1399","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0416 18:21:36.923219    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:36.923281    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:36.923281    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:36.923281    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:36.925647    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:36.926100    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:36.926100    6100 round_trippers.go:580]     Audit-Id: d5921aeb-6706-4f81-b65f-5e63d2ea2e65
	I0416 18:21:36.926100    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:36.926100    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:36.926100    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:36.926100    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:36.926100    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:37 GMT
	I0416 18:21:36.926300    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:37.411714    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-86z7h
	I0416 18:21:37.411790    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:37.411790    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:37.411790    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:37.414703    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:37.415086    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:37.415086    6100 round_trippers.go:580]     Audit-Id: 6c259737-b8c4-41a7-bf15-83bd23207d1b
	I0416 18:21:37.415086    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:37.415086    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:37.415086    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:37.415147    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:37.415147    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:37 GMT
	I0416 18:21:37.415147    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1490","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6788 chars]
	I0416 18:21:37.416555    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:37.416622    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:37.416688    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:37.416688    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:37.422432    6100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 18:21:37.422432    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:37.422432    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:37.422432    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:37 GMT
	I0416 18:21:37.422432    6100 round_trippers.go:580]     Audit-Id: 0003391b-7653-44ee-81ff-3505738c482a
	I0416 18:21:37.422432    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:37.422432    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:37.422432    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:37.422432    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:37.422432    6100 pod_ready.go:92] pod "coredns-76f75df574-86z7h" in "kube-system" namespace has status "Ready":"True"
	I0416 18:21:37.423396    6100 pod_ready.go:81] duration metric: took 3.5154309s for pod "coredns-76f75df574-86z7h" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:37.423396    6100 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:37.423396    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:21:37.423396    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:37.423396    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:37.423396    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:37.426400    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:37.426640    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:37.426640    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:37.426640    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:37.426640    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:37.426640    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:37.426640    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:37 GMT
	I0416 18:21:37.426640    6100 round_trippers.go:580]     Audit-Id: e7b46452-6f6f-4ee6-a2c4-19e06c76edaf
	I0416 18:21:37.426640    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"7c7a0e73-a281-4231-95c7-479afeb4945c","resourceVersion":"1397","creationTimestamp":"2024-04-16T18:21:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.83.104:2379","kubernetes.io/config.hash":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.mirror":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.seen":"2024-04-16T18:21:23.147214167Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0416 18:21:37.427230    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:37.427230    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:37.427230    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:37.427230    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:37.430400    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:37.430400    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:37.430400    6100 round_trippers.go:580]     Audit-Id: 9d27dd9a-ded4-4f73-b640-36105bfd0581
	I0416 18:21:37.430400    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:37.430400    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:37.430400    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:37.430400    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:37.430400    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:37 GMT
	I0416 18:21:37.430400    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:37.938595    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:21:37.938595    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:37.938685    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:37.938685    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:37.943248    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:37.943248    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:37.943248    6100 round_trippers.go:580]     Audit-Id: bf0d9622-f0eb-40d6-8fce-d8c3f54fb33f
	I0416 18:21:37.943248    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:37.943248    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:37.943347    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:37.943347    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:37.943347    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:38 GMT
	I0416 18:21:37.943347    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"7c7a0e73-a281-4231-95c7-479afeb4945c","resourceVersion":"1397","creationTimestamp":"2024-04-16T18:21:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.83.104:2379","kubernetes.io/config.hash":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.mirror":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.seen":"2024-04-16T18:21:23.147214167Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0416 18:21:37.944628    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:37.944628    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:37.944628    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:37.944628    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:37.950971    6100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 18:21:37.950971    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:37.951586    6100 round_trippers.go:580]     Audit-Id: a16b1d95-efc0-4220-8118-d1ab05defa3c
	I0416 18:21:37.951586    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:37.951681    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:37.951681    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:37.951681    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:37.951681    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:38 GMT
	I0416 18:21:37.951880    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:38.436917    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:21:38.437023    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:38.437023    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:38.437023    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:38.441799    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:38.441799    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:38.441799    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:38.441799    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:38.441957    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:38.441957    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:38.441957    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:38 GMT
	I0416 18:21:38.441957    6100 round_trippers.go:580]     Audit-Id: 13740a6a-2225-486e-a48a-f7edb8c8dd4c
	I0416 18:21:38.442184    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"7c7a0e73-a281-4231-95c7-479afeb4945c","resourceVersion":"1397","creationTimestamp":"2024-04-16T18:21:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.83.104:2379","kubernetes.io/config.hash":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.mirror":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.seen":"2024-04-16T18:21:23.147214167Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0416 18:21:38.443146    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:38.443146    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:38.443146    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:38.443146    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:38.446653    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:38.447068    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:38.447130    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:38.447130    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:38.447130    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:38.447130    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:38 GMT
	I0416 18:21:38.447130    6100 round_trippers.go:580]     Audit-Id: 14dbe3de-0d14-47fb-af48-02fa2266f924
	I0416 18:21:38.447221    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:38.447435    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:38.935672    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:21:38.935672    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:38.935672    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:38.935672    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:38.939389    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:38.939389    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:38.939389    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:38.939389    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:38.939389    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:39 GMT
	I0416 18:21:38.939389    6100 round_trippers.go:580]     Audit-Id: ce5deb3e-13eb-4e59-b5ab-374116f13ac5
	I0416 18:21:38.939389    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:38.939389    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:38.940276    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"7c7a0e73-a281-4231-95c7-479afeb4945c","resourceVersion":"1397","creationTimestamp":"2024-04-16T18:21:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.83.104:2379","kubernetes.io/config.hash":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.mirror":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.seen":"2024-04-16T18:21:23.147214167Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0416 18:21:38.941170    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:38.941281    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:38.941281    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:38.941281    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:38.944552    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:38.944552    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:38.944552    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:38.944552    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:38.944552    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:39 GMT
	I0416 18:21:38.944552    6100 round_trippers.go:580]     Audit-Id: 72684c84-6ca6-4dd1-90b0-2bb49fe68be5
	I0416 18:21:38.944552    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:38.944552    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:38.945005    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:39.433328    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:21:39.433328    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.433582    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.433582    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.438155    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:39.438240    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.438240    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.438240    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.438322    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:39 GMT
	I0416 18:21:39.438355    6100 round_trippers.go:580]     Audit-Id: 8ed069fe-9db3-4d0d-8f9b-9817b042bb1d
	I0416 18:21:39.438355    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.438355    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.438355    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"7c7a0e73-a281-4231-95c7-479afeb4945c","resourceVersion":"1397","creationTimestamp":"2024-04-16T18:21:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.83.104:2379","kubernetes.io/config.hash":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.mirror":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.seen":"2024-04-16T18:21:23.147214167Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0416 18:21:39.439719    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:39.439719    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.439818    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.439818    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.443188    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:39.443188    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.443188    6100 round_trippers.go:580]     Audit-Id: 660d0d7d-4226-476e-9217-e5e60e717268
	I0416 18:21:39.443188    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.443188    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.443188    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.443188    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.443188    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:39 GMT
	I0416 18:21:39.444417    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:39.444771    6100 pod_ready.go:102] pod "etcd-multinode-945500" in "kube-system" namespace has status "Ready":"False"
	I0416 18:21:39.932115    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-945500
	I0416 18:21:39.932115    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.932115    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.932115    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.934835    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:39.935834    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.935834    6100 round_trippers.go:580]     Audit-Id: 46f3d1a5-6a00-4b6d-b031-4b5bfda076b1
	I0416 18:21:39.935834    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.935834    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.935834    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.935834    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.935834    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.935970    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-945500","namespace":"kube-system","uid":"7c7a0e73-a281-4231-95c7-479afeb4945c","resourceVersion":"1499","creationTimestamp":"2024-04-16T18:21:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.83.104:2379","kubernetes.io/config.hash":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.mirror":"b1890793e21da4e3dbcc47d4da1ff041","kubernetes.io/config.seen":"2024-04-16T18:21:23.147214167Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6160 chars]
	I0416 18:21:39.936542    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:39.936542    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.936542    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.936542    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.943568    6100 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 18:21:39.943662    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.943688    6100 round_trippers.go:580]     Audit-Id: e1a9efe5-074b-4407-8f97-81b0f5e45ca4
	I0416 18:21:39.943688    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.943688    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.943688    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.943688    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.943688    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.943688    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:39.944397    6100 pod_ready.go:92] pod "etcd-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:21:39.944397    6100 pod_ready.go:81] duration metric: took 2.5208579s for pod "etcd-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.944397    6100 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.944397    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-945500
	I0416 18:21:39.944397    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.944397    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.944397    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.947308    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:39.947308    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.947308    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.947308    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.947308    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.947308    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.947308    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.947308    6100 round_trippers.go:580]     Audit-Id: bf8f1406-fc86-4b56-a692-fe908308325e
	I0416 18:21:39.947692    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-945500","namespace":"kube-system","uid":"249203ba-a5d5-4e35-af8e-172d64c91440","resourceVersion":"1488","creationTimestamp":"2024-04-16T18:21:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.83.104:8443","kubernetes.io/config.hash":"2693abda4b2acecd43625f54801b2092","kubernetes.io/config.mirror":"2693abda4b2acecd43625f54801b2092","kubernetes.io/config.seen":"2024-04-16T18:21:23.093778187Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:21:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7695 chars]
	I0416 18:21:39.947692    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:39.947692    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.947692    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.948240    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.950433    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:39.950433    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.950433    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.950433    6100 round_trippers.go:580]     Audit-Id: 5cc856f2-b860-4b00-b6f0-f2b2c65d4463
	I0416 18:21:39.951472    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.951472    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.951472    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.951536    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.951769    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:39.952237    6100 pod_ready.go:92] pod "kube-apiserver-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:21:39.952237    6100 pod_ready.go:81] duration metric: took 7.8403ms for pod "kube-apiserver-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.952237    6100 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.952346    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-945500
	I0416 18:21:39.952346    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.952346    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.952394    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.955107    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:39.955107    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.955107    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.955107    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.955107    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.955107    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.955107    6100 round_trippers.go:580]     Audit-Id: 26b3c2de-6b6a-40e4-816e-5a1da659023a
	I0416 18:21:39.955107    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.955107    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-945500","namespace":"kube-system","uid":"01b937c2-9827-4240-83f0-3536fec5eb5e","resourceVersion":"1496","creationTimestamp":"2024-04-16T17:57:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.mirror":"5db71de2029227779432bddd337fc81d","kubernetes.io/config.seen":"2024-04-16T17:57:28.101473146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0416 18:21:39.956087    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:39.956087    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.956087    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.956087    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.959080    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:39.959080    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.959080    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.959080    6100 round_trippers.go:580]     Audit-Id: 9b9d4b5a-b0cd-4992-b3c9-35b42f392010
	I0416 18:21:39.959080    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.959080    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.959080    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.959080    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.959711    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:39.960176    6100 pod_ready.go:92] pod "kube-controller-manager-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:21:39.960176    6100 pod_ready.go:81] duration metric: took 7.9378ms for pod "kube-controller-manager-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.960176    6100 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q5bdr" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.960289    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q5bdr
	I0416 18:21:39.960289    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.960289    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.960289    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.962889    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:39.963445    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.963445    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.963445    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.963445    6100 round_trippers.go:580]     Audit-Id: 7cde5edf-b459-4175-93fb-29a22c7f29b6
	I0416 18:21:39.963445    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.963445    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.963445    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.963627    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q5bdr","generateName":"kube-proxy-","namespace":"kube-system","uid":"18f90e3f-dd52-44a3-918a-66181a779f58","resourceVersion":"614","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5826 chars]
	I0416 18:21:39.964245    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500-m02
	I0416 18:21:39.964311    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.964311    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.964311    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.966213    6100 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0416 18:21:39.967060    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.967060    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.967060    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.967060    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.967060    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.967060    6100 round_trippers.go:580]     Audit-Id: fdfe1c9b-29b1-4349-a1d2-7d45560eb224
	I0416 18:21:39.967060    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.967258    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500-m02","uid":"af76672a-5dd1-48f9-a88c-292aab9fb7b9","resourceVersion":"1253","creationTimestamp":"2024-04-16T18:00:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_16T18_00_22_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T18:00:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-mana [truncated 3831 chars]
	I0416 18:21:39.967651    6100 pod_ready.go:92] pod "kube-proxy-q5bdr" in "kube-system" namespace has status "Ready":"True"
	I0416 18:21:39.967651    6100 pod_ready.go:81] duration metric: took 7.4745ms for pod "kube-proxy-q5bdr" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.967651    6100 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.967807    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfxsg
	I0416 18:21:39.967807    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.967807    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.967807    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.971020    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:39.971020    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.971020    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.971020    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.971020    6100 round_trippers.go:580]     Audit-Id: b0d33fde-5c6d-49ca-a035-9154f49fd9c8
	I0416 18:21:39.971020    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.971020    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.971020    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.971604    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rfxsg","generateName":"kube-proxy-","namespace":"kube-system","uid":"b740e6e0-4768-4dd4-a958-307662a92578","resourceVersion":"1410","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"83f1bde2-7175-4a0f-944e-61200d7e7177","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83f1bde2-7175-4a0f-944e-61200d7e7177\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0416 18:21:39.971781    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:39.971781    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:39.971781    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:39.971781    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:39.975513    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:39.975513    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:39.975513    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:39.975513    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:39.975513    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:39.975513    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:39.975513    6100 round_trippers.go:580]     Audit-Id: 81d6b4f1-b72b-4a04-b212-c91b6a4ed4a5
	I0416 18:21:39.975513    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:39.975513    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:39.975513    6100 pod_ready.go:92] pod "kube-proxy-rfxsg" in "kube-system" namespace has status "Ready":"True"
	I0416 18:21:39.975513    6100 pod_ready.go:81] duration metric: took 7.8617ms for pod "kube-proxy-rfxsg" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:39.975513    6100 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:40.164366    6100 request.go:629] Waited for 188.8427ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 18:21:40.164570    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-945500
	I0416 18:21:40.164570    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:40.164570    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:40.164630    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:40.170107    6100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 18:21:40.170107    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:40.170107    6100 round_trippers.go:580]     Audit-Id: dc65597b-8bad-4c5b-ba54-efb95d5a6d06
	I0416 18:21:40.170107    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:40.170600    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:40.170600    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:40.170712    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:40.170712    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:40.172101    6100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-945500","namespace":"kube-system","uid":"a09e52e8-1ac2-4c22-8a3d-57969fae85a9","resourceVersion":"1495","creationTimestamp":"2024-04-16T17:57:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.mirror":"4ebc73a23d79d1dece7469fd94c931d1","kubernetes.io/config.seen":"2024-04-16T17:57:20.694761708Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0416 18:21:40.351945    6100 request.go:629] Waited for 178.7261ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:40.351945    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes/multinode-945500
	I0416 18:21:40.352301    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:40.352301    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:40.352364    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:40.355701    6100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 18:21:40.355893    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:40.356154    6100 round_trippers.go:580]     Audit-Id: fb1a54cf-8dd3-4f57-9668-350183301549
	I0416 18:21:40.356410    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:40.356410    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:40.356756    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:40.356756    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:40.356756    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:40.356756    6100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-04-16T17:57:24Z","fieldsType":"Field [truncated 5245 chars]
	I0416 18:21:40.357383    6100 pod_ready.go:92] pod "kube-scheduler-multinode-945500" in "kube-system" namespace has status "Ready":"True"
	I0416 18:21:40.357383    6100 pod_ready.go:81] duration metric: took 381.8484ms for pod "kube-scheduler-multinode-945500" in "kube-system" namespace to be "Ready" ...
	I0416 18:21:40.357383    6100 pod_ready.go:38] duration metric: took 6.4621349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:21:40.357383    6100 api_server.go:52] waiting for apiserver process to appear ...
	I0416 18:21:40.365891    6100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 18:21:40.388252    6100 command_runner.go:130] > 1832
	I0416 18:21:40.388395    6100 api_server.go:72] duration metric: took 8.8052754s to wait for apiserver process to appear ...
	I0416 18:21:40.388490    6100 api_server.go:88] waiting for apiserver healthz status ...
	I0416 18:21:40.388490    6100 api_server.go:253] Checking apiserver healthz at https://172.19.83.104:8443/healthz ...
	I0416 18:21:40.398159    6100 api_server.go:279] https://172.19.83.104:8443/healthz returned 200:
	ok
	I0416 18:21:40.398954    6100 round_trippers.go:463] GET https://172.19.83.104:8443/version
	I0416 18:21:40.398954    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:40.398954    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:40.398954    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:40.400128    6100 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0416 18:21:40.400128    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:40.400267    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:40.400267    6100 round_trippers.go:580]     Content-Length: 263
	I0416 18:21:40.400267    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:40.400267    6100 round_trippers.go:580]     Audit-Id: 29b02fa8-179e-4e10-905a-f93eba60ae66
	I0416 18:21:40.400267    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:40.400267    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:40.400267    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:40.400267    6100 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0416 18:21:40.400478    6100 api_server.go:141] control plane version: v1.29.3
	I0416 18:21:40.400514    6100 api_server.go:131] duration metric: took 12.0231ms to wait for apiserver health ...
	I0416 18:21:40.400553    6100 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 18:21:40.553114    6100 request.go:629] Waited for 152.1242ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods
	I0416 18:21:40.553114    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods
	I0416 18:21:40.553114    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:40.553114    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:40.553114    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:40.557787    6100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 18:21:40.557787    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:40.557787    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:40.557787    6100 round_trippers.go:580]     Audit-Id: 7afe4704-a15c-4f3f-8ef1-74ca8d7c3124
	I0416 18:21:40.557787    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:40.557787    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:40.557787    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:40.557787    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:40.559510    6100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1503"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1490","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 71649 chars]
	I0416 18:21:40.562596    6100 system_pods.go:59] 10 kube-system pods found
	I0416 18:21:40.562674    6100 system_pods.go:61] "coredns-76f75df574-86z7h" [1ca004a0-0575-4576-a5ed-ba0891a7d277] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "etcd-multinode-945500" [7c7a0e73-a281-4231-95c7-479afeb4945c] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "kindnet-7pg6g" [b4887fd4-c2ff-40a2-ab8f-89e227151faa] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "kindnet-tp7jl" [91595b62-10ee-47cb-a0c9-2ca83ad70af7] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "kube-apiserver-multinode-945500" [249203ba-a5d5-4e35-af8e-172d64c91440] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "kube-controller-manager-multinode-945500" [01b937c2-9827-4240-83f0-3536fec5eb5e] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "kube-proxy-q5bdr" [18f90e3f-dd52-44a3-918a-66181a779f58] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "kube-proxy-rfxsg" [b740e6e0-4768-4dd4-a958-307662a92578] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "kube-scheduler-multinode-945500" [a09e52e8-1ac2-4c22-8a3d-57969fae85a9] Running
	I0416 18:21:40.562674    6100 system_pods.go:61] "storage-provisioner" [3bd5cc95-eef6-473e-b6f9-898568046f1b] Running
	I0416 18:21:40.562674    6100 system_pods.go:74] duration metric: took 162.112ms to wait for pod list to return data ...
	I0416 18:21:40.562674    6100 default_sa.go:34] waiting for default service account to be created ...
	I0416 18:21:40.755578    6100 request.go:629] Waited for 192.6589ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/namespaces/default/serviceaccounts
	I0416 18:21:40.755835    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/default/serviceaccounts
	I0416 18:21:40.755931    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:40.755931    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:40.755931    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:40.759116    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:40.759821    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:40.759821    6100 round_trippers.go:580]     Content-Length: 262
	I0416 18:21:40.759821    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:40 GMT
	I0416 18:21:40.759915    6100 round_trippers.go:580]     Audit-Id: 4233f8fe-ea2b-49ab-bcca-af631fea79fd
	I0416 18:21:40.759915    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:40.759915    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:40.759915    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:40.759915    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:40.759915    6100 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1503"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"26260d2a-9800-4f2e-87ba-a34049d52e3f","resourceVersion":"332","creationTimestamp":"2024-04-16T17:57:40Z"}}]}
	I0416 18:21:40.760359    6100 default_sa.go:45] found service account: "default"
	I0416 18:21:40.760452    6100 default_sa.go:55] duration metric: took 197.6791ms for default service account to be created ...
	I0416 18:21:40.760452    6100 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 18:21:40.956893    6100 request.go:629] Waited for 196.3186ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods
	I0416 18:21:40.957238    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/namespaces/kube-system/pods
	I0416 18:21:40.957238    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:40.957238    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:40.957238    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:40.965417    6100 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0416 18:21:40.965417    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:40.965417    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:41 GMT
	I0416 18:21:40.965586    6100 round_trippers.go:580]     Audit-Id: 9b21ae42-d638-4a7c-a7df-cff709a98ea0
	I0416 18:21:40.965586    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:40.965586    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:40.965586    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:40.965586    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:40.966433    6100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1503"},"items":[{"metadata":{"name":"coredns-76f75df574-86z7h","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"1ca004a0-0575-4576-a5ed-ba0891a7d277","resourceVersion":"1490","creationTimestamp":"2024-04-16T17:57:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"013ea4f6-c951-4629-83e9-f77ee592de2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-16T17:57:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"013ea4f6-c951-4629-83e9-f77ee592de2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 71649 chars]
	I0416 18:21:40.971169    6100 system_pods.go:86] 10 kube-system pods found
	I0416 18:21:40.971712    6100 system_pods.go:89] "coredns-76f75df574-86z7h" [1ca004a0-0575-4576-a5ed-ba0891a7d277] Running
	I0416 18:21:40.971712    6100 system_pods.go:89] "etcd-multinode-945500" [7c7a0e73-a281-4231-95c7-479afeb4945c] Running
	I0416 18:21:40.971712    6100 system_pods.go:89] "kindnet-7pg6g" [b4887fd4-c2ff-40a2-ab8f-89e227151faa] Running
	I0416 18:21:40.971712    6100 system_pods.go:89] "kindnet-tp7jl" [91595b62-10ee-47cb-a0c9-2ca83ad70af7] Running
	I0416 18:21:40.971843    6100 system_pods.go:89] "kube-apiserver-multinode-945500" [249203ba-a5d5-4e35-af8e-172d64c91440] Running
	I0416 18:21:40.971843    6100 system_pods.go:89] "kube-controller-manager-multinode-945500" [01b937c2-9827-4240-83f0-3536fec5eb5e] Running
	I0416 18:21:40.971903    6100 system_pods.go:89] "kube-proxy-q5bdr" [18f90e3f-dd52-44a3-918a-66181a779f58] Running
	I0416 18:21:40.971903    6100 system_pods.go:89] "kube-proxy-rfxsg" [b740e6e0-4768-4dd4-a958-307662a92578] Running
	I0416 18:21:40.971903    6100 system_pods.go:89] "kube-scheduler-multinode-945500" [a09e52e8-1ac2-4c22-8a3d-57969fae85a9] Running
	I0416 18:21:40.971903    6100 system_pods.go:89] "storage-provisioner" [3bd5cc95-eef6-473e-b6f9-898568046f1b] Running
	I0416 18:21:40.971968    6100 system_pods.go:126] duration metric: took 211.5043ms to wait for k8s-apps to be running ...
	I0416 18:21:40.971968    6100 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 18:21:40.981798    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:21:41.008878    6100 system_svc.go:56] duration metric: took 36.9079ms WaitForService to wait for kubelet
	I0416 18:21:41.008997    6100 kubeadm.go:576] duration metric: took 9.4258421s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 18:21:41.008997    6100 node_conditions.go:102] verifying NodePressure condition ...
	I0416 18:21:41.160620    6100 request.go:629] Waited for 151.3974ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.83.104:8443/api/v1/nodes
	I0416 18:21:41.160740    6100 round_trippers.go:463] GET https://172.19.83.104:8443/api/v1/nodes
	I0416 18:21:41.160740    6100 round_trippers.go:469] Request Headers:
	I0416 18:21:41.160740    6100 round_trippers.go:473]     Accept: application/json, */*
	I0416 18:21:41.160740    6100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0416 18:21:41.164643    6100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 18:21:41.164643    6100 round_trippers.go:577] Response Headers:
	I0416 18:21:41.164643    6100 round_trippers.go:580]     Content-Type: application/json
	I0416 18:21:41.164643    6100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1e0a47cb-3f86-4b2f-9fcc-1775b5a31159
	I0416 18:21:41.164643    6100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c492e1b5-bad0-44e2-89ec-ce860de32472
	I0416 18:21:41.164643    6100 round_trippers.go:580]     Date: Tue, 16 Apr 2024 18:21:41 GMT
	I0416 18:21:41.164643    6100 round_trippers.go:580]     Audit-Id: dbc145b1-a726-4a63-9c8a-a3bf75497182
	I0416 18:21:41.164643    6100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0416 18:21:41.165830    6100 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1503"},"items":[{"metadata":{"name":"multinode-945500","uid":"f418f56d-3033-4799-a361-3e0c3dc96699","resourceVersion":"1478","creationTimestamp":"2024-04-16T17:57:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-945500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4","minikube.k8s.io/name":"multinode-945500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_16T17_57_28_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"ma
nagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 10122 chars]
	I0416 18:21:41.166716    6100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:21:41.166798    6100 node_conditions.go:123] node cpu capacity is 2
	I0416 18:21:41.166798    6100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:21:41.166881    6100 node_conditions.go:123] node cpu capacity is 2
	I0416 18:21:41.166881    6100 node_conditions.go:105] duration metric: took 157.8756ms to run NodePressure ...
	I0416 18:21:41.166881    6100 start.go:240] waiting for startup goroutines ...
	I0416 18:21:41.166881    6100 start.go:245] waiting for cluster config update ...
	I0416 18:21:41.166967    6100 start.go:254] writing updated cluster config ...
	I0416 18:21:41.168822    6100 out.go:177] 
	I0416 18:21:41.180869    6100 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:21:41.180869    6100 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 18:21:41.183873    6100 out.go:177] * Starting "multinode-945500-m02" worker node in "multinode-945500" cluster
	I0416 18:21:41.183981    6100 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 18:21:41.183981    6100 cache.go:56] Caching tarball of preloaded images
	I0416 18:21:41.184538    6100 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 18:21:41.184538    6100 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 18:21:41.184538    6100 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 18:21:41.186224    6100 start.go:360] acquireMachinesLock for multinode-945500-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 18:21:41.186606    6100 start.go:364] duration metric: took 381.2µs to acquireMachinesLock for "multinode-945500-m02"
	I0416 18:21:41.186606    6100 start.go:96] Skipping create...Using existing machine configuration
	I0416 18:21:41.186606    6100 fix.go:54] fixHost starting: m02
	I0416 18:21:41.187265    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:21:43.122720    6100 main.go:141] libmachine: [stdout =====>] : Off
	
	I0416 18:21:43.123029    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:43.123029    6100 fix.go:112] recreateIfNeeded on multinode-945500-m02: state=Stopped err=<nil>
	W0416 18:21:43.123029    6100 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 18:21:43.123676    6100 out.go:177] * Restarting existing hyperv VM for "multinode-945500-m02" ...
	I0416 18:21:43.123676    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-945500-m02
	I0416 18:21:45.748144    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:21:45.748144    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:45.748144    6100 main.go:141] libmachine: Waiting for host to start...
	I0416 18:21:45.748144    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:21:47.860568    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:21:47.860720    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:47.860882    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:21:50.102037    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:21:50.102078    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:51.112105    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:21:53.127238    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:21:53.127730    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:53.127793    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:21:55.401118    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:21:55.401306    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:56.414887    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:21:58.409960    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:21:58.409960    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:21:58.410889    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:00.689836    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:22:00.690782    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:01.700444    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:03.673705    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:03.673705    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:03.674543    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:05.957195    6100 main.go:141] libmachine: [stdout =====>] : 
	I0416 18:22:05.957382    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:06.959242    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:08.966475    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:08.966475    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:08.966475    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:11.348413    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:11.348719    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:11.351698    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:13.328503    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:13.328503    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:13.328503    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:15.603692    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:15.603692    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:15.604348    6100 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-945500\config.json ...
	I0416 18:22:15.606274    6100 machine.go:94] provisionDockerMachine start ...
	I0416 18:22:15.606274    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:17.566084    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:17.566084    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:17.566355    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:19.898403    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:19.899352    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:19.905720    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:22:19.906322    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.190 22 <nil> <nil>}
	I0416 18:22:19.906322    6100 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 18:22:20.042650    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 18:22:20.042650    6100 buildroot.go:166] provisioning hostname "multinode-945500-m02"
	I0416 18:22:20.042650    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:21.954446    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:21.955274    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:21.955274    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:24.274342    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:24.275253    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:24.279122    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:22:24.279191    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.190 22 <nil> <nil>}
	I0416 18:22:24.279191    6100 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-945500-m02 && echo "multinode-945500-m02" | sudo tee /etc/hostname
	I0416 18:22:24.439073    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-945500-m02
	
	I0416 18:22:24.439073    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:26.366612    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:26.367251    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:26.367310    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:28.659724    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:28.659724    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:28.664426    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:22:28.664502    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.190 22 <nil> <nil>}
	I0416 18:22:28.664502    6100 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-945500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-945500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-945500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 18:22:28.805396    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 18:22:28.805396    6100 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 18:22:28.805473    6100 buildroot.go:174] setting up certificates
	I0416 18:22:28.805473    6100 provision.go:84] configureAuth start
	I0416 18:22:28.805578    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:30.740898    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:30.740898    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:30.741442    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:33.046580    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:33.047578    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:33.047672    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:34.967762    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:34.967762    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:34.967762    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:37.247308    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:37.247491    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:37.247491    6100 provision.go:143] copyHostCerts
	I0416 18:22:37.247647    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0416 18:22:37.247864    6100 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 18:22:37.247864    6100 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 18:22:37.248291    6100 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 18:22:37.248730    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0416 18:22:37.248730    6100 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 18:22:37.248730    6100 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 18:22:37.249295    6100 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 18:22:37.250155    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0416 18:22:37.250221    6100 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 18:22:37.250221    6100 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 18:22:37.250221    6100 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 18:22:37.250817    6100 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-945500-m02 san=[127.0.0.1 172.19.85.190 localhost minikube multinode-945500-m02]
	I0416 18:22:37.362535    6100 provision.go:177] copyRemoteCerts
	I0416 18:22:37.372314    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 18:22:37.372314    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:39.281788    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:39.281836    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:39.281836    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:41.591349    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:41.591349    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:41.591871    6100 sshutil.go:53] new ssh client: &{IP:172.19.85.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:22:41.697556    6100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3249959s)
	I0416 18:22:41.698576    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0416 18:22:41.698576    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 18:22:41.744601    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0416 18:22:41.745250    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0416 18:22:41.790739    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0416 18:22:41.790917    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 18:22:41.837836    6100 provision.go:87] duration metric: took 13.0315585s to configureAuth
	I0416 18:22:41.837836    6100 buildroot.go:189] setting minikube options for container-runtime
	I0416 18:22:41.839458    6100 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:22:41.839610    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:43.795958    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:43.795958    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:43.796720    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:46.098211    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:46.098322    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:46.104009    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:22:46.104009    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.190 22 <nil> <nil>}
	I0416 18:22:46.104533    6100 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 18:22:46.233949    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 18:22:46.234075    6100 buildroot.go:70] root file system type: tmpfs
	I0416 18:22:46.234212    6100 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 18:22:46.234297    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:48.196256    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:48.196311    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:48.196311    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:50.533367    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:50.533940    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:50.538400    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:22:50.539010    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.190 22 <nil> <nil>}
	I0416 18:22:50.539010    6100 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.83.104"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 18:22:50.694879    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.83.104
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 18:22:50.695011    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:52.641130    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:52.641130    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:52.641203    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:22:54.978009    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:22:54.978009    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:54.981917    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:22:54.982440    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.190 22 <nil> <nil>}
	I0416 18:22:54.982440    6100 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 18:22:57.116758    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0416 18:22:57.116856    6100 machine.go:97] duration metric: took 41.5082239s to provisionDockerMachine
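[editor's note] The SSH command above is minikube's idempotent unit update: write `docker.service.new`, then only move it into place (and daemon-reload/enable/restart) when `diff -u` reports a difference or the old unit is missing. A minimal sketch of that diff-or-replace idiom, using throwaway paths under `/tmp` instead of `/lib/systemd/system`:

```shell
#!/bin/sh
# Diff-or-replace sketch: install a candidate file only when it
# differs from (or is newer than a missing) current file.
set -eu

current="/tmp/demo-docker.service"   # stands in for /lib/systemd/system/docker.service
candidate="${current}.new"

printf '%s\n' "[Unit]" "Description=demo" > "$candidate"

# diff -u exits non-zero when the files differ or $current is absent
# (the "can't stat" case seen in the log), which triggers the swap.
if ! diff -u "$current" "$candidate" >/dev/null 2>&1; then
    mv "$candidate" "$current"
    echo "unit replaced"
else
    rm -f "$candidate"
    echo "unit unchanged"
fi
```

In the real command the replace branch additionally runs `systemctl -f daemon-reload && systemctl -f enable docker && systemctl -f restart docker`.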
	I0416 18:22:57.116856    6100 start.go:293] postStartSetup for "multinode-945500-m02" (driver="hyperv")
	I0416 18:22:57.116856    6100 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 18:22:57.129239    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 18:22:57.129239    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:22:59.050319    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:22:59.050319    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:22:59.050539    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:23:01.324036    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:23:01.324036    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:01.324616    6100 sshutil.go:53] new ssh client: &{IP:172.19.85.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:23:01.435971    6100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3064877s)
	I0416 18:23:01.445670    6100 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 18:23:01.453013    6100 command_runner.go:130] > NAME=Buildroot
	I0416 18:23:01.453013    6100 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 18:23:01.453076    6100 command_runner.go:130] > ID=buildroot
	I0416 18:23:01.453076    6100 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 18:23:01.453076    6100 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 18:23:01.453112    6100 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 18:23:01.453112    6100 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 18:23:01.453785    6100 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 18:23:01.454945    6100 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 18:23:01.455022    6100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> /etc/ssl/certs/53242.pem
	I0416 18:23:01.465558    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 18:23:01.484368    6100 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 18:23:01.531205    6100 start.go:296] duration metric: took 4.4140988s for postStartSetup
	I0416 18:23:01.531205    6100 fix.go:56] duration metric: took 1m20.3400362s for fixHost
	I0416 18:23:01.531205    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:23:03.466236    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:23:03.466236    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:03.466466    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:23:05.859855    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:23:05.859855    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:05.865191    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:23:05.865801    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.190 22 <nil> <nil>}
	I0416 18:23:05.865801    6100 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 18:23:06.005331    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713291786.170114689
	
	I0416 18:23:06.005882    6100 fix.go:216] guest clock: 1713291786.170114689
	I0416 18:23:06.005882    6100 fix.go:229] Guest: 2024-04-16 18:23:06.170114689 +0000 UTC Remote: 2024-04-16 18:23:01.5312057 +0000 UTC m=+211.940753701 (delta=4.638908989s)
	I0416 18:23:06.005989    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:23:07.994063    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:23:07.994063    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:07.994142    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:23:10.359587    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:23:10.359587    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:10.364333    6100 main.go:141] libmachine: Using SSH client type: native
	I0416 18:23:10.364554    6100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.85.190 22 <nil> <nil>}
	I0416 18:23:10.364554    6100 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713291786
	I0416 18:23:10.518299    6100 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 18:23:06 UTC 2024
	
	I0416 18:23:10.518299    6100 fix.go:236] clock set: Tue Apr 16 18:23:06 UTC 2024
	 (err=<nil>)
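[editor's note] The clock-fix step above reads the guest's epoch time with `date +%s.%N`, compares it against the host's view (the log reports a delta of ~4.6s), and resets the guest with `sudo date -s @<epoch>`. A rough sketch of the drift check; the 2-second threshold is illustrative, not minikube's actual constant:

```shell
#!/bin/sh
# Guest-clock drift sketch: compare two epoch timestamps and decide
# whether a `date -s @<epoch>` style reset would be needed.
guest=1713291786   # seconds, as read via `date +%s` on the guest
host=1713291781    # seconds, host's notion of "now"

delta=$((guest - host))
[ "$delta" -lt 0 ] && delta=$((-delta))
echo "$delta" > /tmp/demo_clock_drift   # recorded for inspection

# Threshold is an assumption for the sketch.
if [ "$delta" -gt 2 ]; then
    echo "drift ${delta}s: would run 'sudo date -s @${host}'"
else
    echo "clock within tolerance"
fi
```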
	I0416 18:23:10.518299    6100 start.go:83] releasing machines lock for "multinode-945500-m02", held for 1m29.3266195s
	I0416 18:23:10.518689    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:23:12.533561    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:23:12.533561    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:12.533561    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:23:14.886419    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:23:14.886419    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:14.887063    6100 out.go:177] * Found network options:
	I0416 18:23:14.888107    6100 out.go:177]   - NO_PROXY=172.19.83.104
	W0416 18:23:14.888536    6100 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 18:23:14.889220    6100 out.go:177]   - NO_PROXY=172.19.83.104
	W0416 18:23:14.889643    6100 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 18:23:14.891355    6100 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 18:23:14.893485    6100 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 18:23:14.893607    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:23:14.903958    6100 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 18:23:14.903958    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:23:16.849662    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:23:16.849662    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:16.849986    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:23:16.863999    6100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:23:16.863999    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:16.864105    6100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:23:19.223225    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:23:19.223274    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:19.223677    6100 sshutil.go:53] new ssh client: &{IP:172.19.85.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:23:19.247541    6100 main.go:141] libmachine: [stdout =====>] : 172.19.85.190
	
	I0416 18:23:19.248529    6100 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:23:19.248849    6100 sshutil.go:53] new ssh client: &{IP:172.19.85.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:23:19.325688    6100 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0416 18:23:19.326693    6100 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.422412s)
	W0416 18:23:19.326776    6100 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 18:23:19.337461    6100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 18:23:19.452882    6100 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 18:23:19.452882    6100 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0416 18:23:19.452882    6100 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
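[editor's note] The `find` invocation above is how minikube neutralizes conflicting bridge/podman CNI configs: any matching file in `/etc/cni/net.d` is renamed with a `.mk_disabled` suffix (here `87-podman-bridge.conflist`). The same pattern against a scratch directory:

```shell
#!/bin/sh
# CNI-disable sketch: rename bridge/podman configs by appending
# ".mk_disabled", mirroring the find command in the log above.
set -eu
dir="/tmp/demo-cni-net.d"
mkdir -p "$dir"
touch "$dir/87-podman-bridge.conflist"

find "$dir" -maxdepth 1 -type f \
    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$dir"
```

The `-not -name '*.mk_disabled'` guard makes the operation idempotent: already-disabled files are skipped on a second pass.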
	I0416 18:23:19.452882    6100 start.go:494] detecting cgroup driver to use...
	I0416 18:23:19.452882    6100 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5591378s)
	I0416 18:23:19.452882    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:23:19.486422    6100 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0416 18:23:19.497666    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 18:23:19.524419    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 18:23:19.544059    6100 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 18:23:19.554792    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 18:23:19.586149    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:23:19.616115    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 18:23:19.645703    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:23:19.676168    6100 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 18:23:19.702038    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 18:23:19.729888    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 18:23:19.756567    6100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
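[editor's note] The sed sequence above rewrites `/etc/containerd/config.toml` in place to select the "cgroupfs" cgroup driver, most importantly forcing `SystemdCgroup = false`. The key edit, reproduced against a demo config:

```shell
#!/bin/sh
# Containerd cgroup-driver sketch: flip SystemdCgroup to false in a
# demo config.toml, as the sed command in the log above does.
set -eu
cfg="/tmp/demo-config.toml"
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Matches any indented "SystemdCgroup = ..." line, preserving the
# leading whitespace via the \1 back-reference.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

grep SystemdCgroup "$cfg"
```

`sed -i` without a suffix argument is GNU sed behavior (BSD sed requires `-i ''`), which is fine inside the Buildroot guest.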
	I0416 18:23:19.789461    6100 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 18:23:19.807795    6100 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 18:23:19.819941    6100 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 18:23:19.849051    6100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:23:20.054511    6100 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 18:23:20.086480    6100 start.go:494] detecting cgroup driver to use...
	I0416 18:23:20.097132    6100 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 18:23:20.116134    6100 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0416 18:23:20.117134    6100 command_runner.go:130] > [Unit]
	I0416 18:23:20.117134    6100 command_runner.go:130] > Description=Docker Application Container Engine
	I0416 18:23:20.117600    6100 command_runner.go:130] > Documentation=https://docs.docker.com
	I0416 18:23:20.117600    6100 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0416 18:23:20.117600    6100 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0416 18:23:20.117600    6100 command_runner.go:130] > StartLimitBurst=3
	I0416 18:23:20.117600    6100 command_runner.go:130] > StartLimitIntervalSec=60
	I0416 18:23:20.117660    6100 command_runner.go:130] > [Service]
	I0416 18:23:20.117660    6100 command_runner.go:130] > Type=notify
	I0416 18:23:20.117660    6100 command_runner.go:130] > Restart=on-failure
	I0416 18:23:20.117660    6100 command_runner.go:130] > Environment=NO_PROXY=172.19.83.104
	I0416 18:23:20.117660    6100 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0416 18:23:20.117660    6100 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0416 18:23:20.117660    6100 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0416 18:23:20.117789    6100 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0416 18:23:20.117789    6100 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0416 18:23:20.117824    6100 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0416 18:23:20.117848    6100 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0416 18:23:20.117848    6100 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0416 18:23:20.117891    6100 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0416 18:23:20.117891    6100 command_runner.go:130] > ExecStart=
	I0416 18:23:20.117932    6100 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0416 18:23:20.117968    6100 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0416 18:23:20.118002    6100 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0416 18:23:20.118002    6100 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0416 18:23:20.118002    6100 command_runner.go:130] > LimitNOFILE=infinity
	I0416 18:23:20.118002    6100 command_runner.go:130] > LimitNPROC=infinity
	I0416 18:23:20.118002    6100 command_runner.go:130] > LimitCORE=infinity
	I0416 18:23:20.118002    6100 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0416 18:23:20.118002    6100 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0416 18:23:20.118002    6100 command_runner.go:130] > TasksMax=infinity
	I0416 18:23:20.118002    6100 command_runner.go:130] > TimeoutStartSec=0
	I0416 18:23:20.118002    6100 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0416 18:23:20.118002    6100 command_runner.go:130] > Delegate=yes
	I0416 18:23:20.118002    6100 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0416 18:23:20.118002    6100 command_runner.go:130] > KillMode=process
	I0416 18:23:20.118002    6100 command_runner.go:130] > [Install]
	I0416 18:23:20.118002    6100 command_runner.go:130] > WantedBy=multi-user.target
	I0416 18:23:20.127764    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:23:20.159989    6100 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 18:23:20.205206    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:23:20.239526    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:23:20.275454    6100 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0416 18:23:20.328999    6100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:23:20.352572    6100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:23:20.388223    6100 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0416 18:23:20.400105    6100 ssh_runner.go:195] Run: which cri-dockerd
	I0416 18:23:20.405661    6100 command_runner.go:130] > /usr/bin/cri-dockerd
	I0416 18:23:20.413748    6100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 18:23:20.430415    6100 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 18:23:20.470575    6100 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 18:23:20.651472    6100 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 18:23:20.825231    6100 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 18:23:20.825326    6100 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
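[editor's note] The `scp memory --> /etc/docker/daemon.json (130 bytes)` line above is minikube pushing an in-memory daemon.json that selects the cgroupfs driver. The exact contents are not shown in the log; a representative file using Docker's documented `exec-opts` key would look like this (an assumption, not the literal bytes minikube writes):

```shell
#!/bin/sh
# Representative daemon.json write for the cgroupfs driver; contents
# are assumed, since the log only reports the byte count.
set -eu
mkdir -p /tmp/demo-etc-docker
cat > /tmp/demo-etc-docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
cat /tmp/demo-etc-docker/daemon.json
```

A change here only takes effect after `systemctl daemon-reload && systemctl restart docker`, which is exactly the next pair of commands in the log (and where this run subsequently fails).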
	I0416 18:23:20.866580    6100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:23:21.044087    6100 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 18:24:22.164247    6100 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0416 18:24:22.164993    6100 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0416 18:24:22.165413    6100 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1178561s)
	I0416 18:24:22.175773    6100 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 systemd[1]: Starting Docker Application Container Engine...
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:55.748792830Z" level=info msg="Starting up"
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:55.749765467Z" level=info msg="containerd not running, starting managed containerd"
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:55.755898330Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=663
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.786942701Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.814425869Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.814628598Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.814724712Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.814749115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.815566430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.815679646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.815908578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.816028495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.816053599Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.816070001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.816633180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.817753338Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.822284176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0416 18:24:22.196955    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.822425296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0416 18:24:22.197575    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.822769044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0416 18:24:22.197618    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.822818751Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0416 18:24:22.197652    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.823871399Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0416 18:24:22.197652    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.824045424Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.824070827Z" level=info msg="metadata content store policy set" policy=shared
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837707647Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837777657Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837802060Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837824363Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837863669Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837963783Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838536664Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838741993Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838856109Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838880612Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838900615Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838936320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838957423Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838979426Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839002229Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839022032Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839041235Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839060437Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839089541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839109244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839128147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839193956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839214259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839232962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.197688    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839250064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198228    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839270167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198267    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839298971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198267    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839315973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198301    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839329075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198301    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839343777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198357    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839357479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198357    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839383283Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0416 18:24:22.198357    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839407386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198357    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839420888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198412    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839433090Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0416 18:24:22.198412    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839554107Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0416 18:24:22.198476    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839576610Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0416 18:24:22.198476    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839594613Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0416 18:24:22.198476    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839606914Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0416 18:24:22.198544    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839667723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0416 18:24:22.198544    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839763536Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0416 18:24:22.198544    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839782839Z" level=info msg="NRI interface is disabled by configuration."
	I0416 18:24:22.198544    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839994869Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0416 18:24:22.198612    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.840059878Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0416 18:24:22.198612    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.840096783Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0416 18:24:22.198612    6100 command_runner.go:130] > Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.840129388Z" level=info msg="containerd successfully booted in 0.056914s"
	I0416 18:24:22.198677    6100 command_runner.go:130] > Apr 16 18:22:56 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:56.795686761Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0416 18:24:22.198677    6100 command_runner.go:130] > Apr 16 18:22:56 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:56.861574880Z" level=info msg="Loading containers: start."
	I0416 18:24:22.198677    6100 command_runner.go:130] > Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.135429298Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0416 18:24:22.198677    6100 command_runner.go:130] > Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.209800097Z" level=info msg="Loading containers: done."
	I0416 18:24:22.198746    6100 command_runner.go:130] > Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.235075293Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	I0416 18:24:22.198746    6100 command_runner.go:130] > Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.235735779Z" level=info msg="Daemon has completed initialization"
	I0416 18:24:22.198746    6100 command_runner.go:130] > Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.278880906Z" level=info msg="API listen on /var/run/docker.sock"
	I0416 18:24:22.198815    6100 command_runner.go:130] > Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.279304261Z" level=info msg="API listen on [::]:2376"
	I0416 18:24:22.198815    6100 command_runner.go:130] > Apr 16 18:22:57 multinode-945500-m02 systemd[1]: Started Docker Application Container Engine.
	I0416 18:24:22.198815    6100 command_runner.go:130] > Apr 16 18:23:21 multinode-945500-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0416 18:24:22.198815    6100 command_runner.go:130] > Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.236586796Z" level=info msg="Processing signal 'terminated'"
	I0416 18:24:22.198880    6100 command_runner.go:130] > Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238466158Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0416 18:24:22.198880    6100 command_runner.go:130] > Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238667364Z" level=info msg="Daemon shutdown complete"
	I0416 18:24:22.198880    6100 command_runner.go:130] > Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238824370Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0416 18:24:22.198880    6100 command_runner.go:130] > Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238874871Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0416 18:24:22.198950    6100 command_runner.go:130] > Apr 16 18:23:22 multinode-945500-m02 systemd[1]: docker.service: Deactivated successfully.
	I0416 18:24:22.198950    6100 command_runner.go:130] > Apr 16 18:23:22 multinode-945500-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0416 18:24:22.199015    6100 command_runner.go:130] > Apr 16 18:23:22 multinode-945500-m02 systemd[1]: Starting Docker Application Container Engine...
	I0416 18:24:22.199015    6100 command_runner.go:130] > Apr 16 18:23:22 multinode-945500-m02 dockerd[1036]: time="2024-04-16T18:23:22.306307286Z" level=info msg="Starting up"
	I0416 18:24:22.199015    6100 command_runner.go:130] > Apr 16 18:24:22 multinode-945500-m02 dockerd[1036]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0416 18:24:22.199080    6100 command_runner.go:130] > Apr 16 18:24:22 multinode-945500-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0416 18:24:22.199080    6100 command_runner.go:130] > Apr 16 18:24:22 multinode-945500-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0416 18:24:22.199080    6100 command_runner.go:130] > Apr 16 18:24:22 multinode-945500-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0416 18:24:22.205084    6100 out.go:177] 
	W0416 18:24:22.205732    6100 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 18:22:55 multinode-945500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:22:55 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:55.748792830Z" level=info msg="Starting up"
	Apr 16 18:22:55 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:55.749765467Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 18:22:55 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:55.755898330Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=663
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.786942701Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.814425869Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.814628598Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.814724712Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.814749115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.815566430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.815679646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.815908578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.816028495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.816053599Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.816070001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.816633180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.817753338Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.822284176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.822425296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.822769044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.822818751Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.823871399Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.824045424Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.824070827Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837707647Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837777657Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837802060Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837824363Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837863669Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.837963783Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838536664Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838741993Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838856109Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838880612Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838900615Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838936320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838957423Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.838979426Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839002229Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839022032Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839041235Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839060437Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839089541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839109244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839128147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839193956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839214259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839232962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839250064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839270167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839298971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839315973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839329075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839343777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839357479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839383283Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839407386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839420888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839433090Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839554107Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839576610Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839594613Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839606914Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839667723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839763536Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839782839Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.839994869Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.840059878Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.840096783Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 18:22:55 multinode-945500-m02 dockerd[663]: time="2024-04-16T18:22:55.840129388Z" level=info msg="containerd successfully booted in 0.056914s"
	Apr 16 18:22:56 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:56.795686761Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 18:22:56 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:56.861574880Z" level=info msg="Loading containers: start."
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.135429298Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.209800097Z" level=info msg="Loading containers: done."
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.235075293Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.235735779Z" level=info msg="Daemon has completed initialization"
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.278880906Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 18:22:57 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:22:57.279304261Z" level=info msg="API listen on [::]:2376"
	Apr 16 18:22:57 multinode-945500-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 16 18:23:21 multinode-945500-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.236586796Z" level=info msg="Processing signal 'terminated'"
	Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238466158Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238667364Z" level=info msg="Daemon shutdown complete"
	Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238824370Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 18:23:21 multinode-945500-m02 dockerd[656]: time="2024-04-16T18:23:21.238874871Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 18:23:22 multinode-945500-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 18:23:22 multinode-945500-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 18:23:22 multinode-945500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:23:22 multinode-945500-m02 dockerd[1036]: time="2024-04-16T18:23:22.306307286Z" level=info msg="Starting up"
	Apr 16 18:24:22 multinode-945500-m02 dockerd[1036]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 18:24:22 multinode-945500-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 18:24:22 multinode-945500-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 18:24:22 multinode-945500-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0416 18:24:22.205732    6100 out.go:239] * 
	W0416 18:24:22.206665    6100 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 18:24:22.207553    6100 out.go:177] 
	
	
	==> Docker <==
	Apr 16 18:21:36 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:36.169097032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:21:36 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:36.169408739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:21:36 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:36.169781348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:21:36 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:36.178916549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:21:36 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:36.178978751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:21:36 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:36.179009451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:21:36 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:36.183607753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:21:36 multinode-945500 cri-dockerd[1266]: time="2024-04-16T18:21:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c6d81ccf8e98228a5e5f8b4d7aabb420364ce2cab7b40c062e8441d9fc020dff/resolv.conf as [nameserver 172.19.80.1]"
	Apr 16 18:21:36 multinode-945500 cri-dockerd[1266]: time="2024-04-16T18:21:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/93beb03081351169eb3be7c42f2d20f7b41e9d5187345e9a5531b601038775b7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 16 18:21:36 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:36.589359313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:21:36 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:36.589739422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:21:36 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:36.589986127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:21:36 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:36.590261533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:21:36 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:36.627331052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:21:36 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:36.627450354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:21:36 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:36.627465455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:21:36 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:36.630085713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:21:59 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:59.220228868Z" level=info msg="shim disconnected" id=9b5d8dc009cddd327248c28cdd979fda8cc661ec190b1ce926af2862c7e7c300 namespace=moby
	Apr 16 18:21:59 multinode-945500 dockerd[1047]: time="2024-04-16T18:21:59.220269071Z" level=info msg="ignoring event" container=9b5d8dc009cddd327248c28cdd979fda8cc661ec190b1ce926af2862c7e7c300 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:21:59 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:59.220315674Z" level=warning msg="cleaning up after shim disconnected" id=9b5d8dc009cddd327248c28cdd979fda8cc661ec190b1ce926af2862c7e7c300 namespace=moby
	Apr 16 18:21:59 multinode-945500 dockerd[1053]: time="2024-04-16T18:21:59.220333275Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:22:12 multinode-945500 dockerd[1053]: time="2024-04-16T18:22:12.305670577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:22:12 multinode-945500 dockerd[1053]: time="2024-04-16T18:22:12.306397316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:22:12 multinode-945500 dockerd[1053]: time="2024-04-16T18:22:12.306442219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:22:12 multinode-945500 dockerd[1053]: time="2024-04-16T18:22:12.306604627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	003626f4335a2       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   c11cb8eefc3b2       storage-provisioner
	a3b6834a63841       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   93beb03081351       busybox-7fdf7869d9-jxvx2
	b088c9aabd7de       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   c6d81ccf8e982       coredns-76f75df574-86z7h
	be8c417a7ef08       4950bb10b3f87                                                                                         3 minutes ago       Running             kindnet-cni               1                   ac868f1eff2af       kindnet-tp7jl
	9b5d8dc009cdd       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   c11cb8eefc3b2       storage-provisioner
	d5c790ea038ef       a1d263b5dc5b0                                                                                         3 minutes ago       Running             kube-proxy                1                   d078ca040bb2c       kube-proxy-rfxsg
	d79493392db4b       3861cfcd7c04c                                                                                         3 minutes ago       Running             etcd                      0                   48f7e3a5df5bb       etcd-multinode-945500
	f57459498855a       39f995c9f1996                                                                                         3 minutes ago       Running             kube-apiserver            0                   d3e28674c5efd       kube-apiserver-multinode-945500
	64cef04cd6ae3       6052a25da3f97                                                                                         3 minutes ago       Running             kube-controller-manager   1                   ade275271db1a       kube-controller-manager-multinode-945500
	fb4097226e2f6       8c390d98f50c0                                                                                         3 minutes ago       Running             kube-scheduler            1                   7525a0a284923       kube-scheduler-multinode-945500
	1475366123af9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   c72a50cfb5bde       busybox-7fdf7869d9-jxvx2
	6ad0b1d75a1e3       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   2ba60ece6840a       coredns-76f75df574-86z7h
	cd37920f1d544       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              26 minutes ago      Exited              kindnet-cni               0                   d2cd68d7f406d       kindnet-tp7jl
	f56880607ce1e       a1d263b5dc5b0                                                                                         27 minutes ago      Exited              kube-proxy                0                   68766d2b671ff       kube-proxy-rfxsg
	4a7c8d9808b66       8c390d98f50c0                                                                                         27 minutes ago      Exited              kube-scheduler            0                   ecb0ceb1a3fed       kube-scheduler-multinode-945500
	91288754cb0bd       6052a25da3f97                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   d28c611e06055       kube-controller-manager-multinode-945500
	
	
	==> coredns [6ad0b1d75a1e] <==
	[INFO] 10.244.1.2:53430 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000153309s
	[INFO] 10.244.1.2:47690 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181411s
	[INFO] 10.244.1.2:40309 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145609s
	[INFO] 10.244.1.2:60258 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000052603s
	[INFO] 10.244.1.2:43597 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000068204s
	[INFO] 10.244.1.2:53767 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061503s
	[INFO] 10.244.1.2:54777 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056603s
	[INFO] 10.244.0.3:38964 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184311s
	[INFO] 10.244.0.3:53114 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074805s
	[INFO] 10.244.0.3:36074 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062204s
	[INFO] 10.244.0.3:60668 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090906s
	[INFO] 10.244.1.2:54659 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099206s
	[INFO] 10.244.1.2:41929 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080505s
	[INFO] 10.244.1.2:40931 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059704s
	[INFO] 10.244.1.2:48577 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058804s
	[INFO] 10.244.0.3:33415 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000283317s
	[INFO] 10.244.0.3:52256 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109407s
	[INFO] 10.244.0.3:34542 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000222014s
	[INFO] 10.244.0.3:59509 - 5 "PTR IN 1.80.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000278017s
	[INFO] 10.244.1.2:34647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164509s
	[INFO] 10.244.1.2:44123 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155309s
	[INFO] 10.244.1.2:47985 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000056403s
	[INFO] 10.244.1.2:38781 - 5 "PTR IN 1.80.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000051303s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b088c9aabd7d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = db872c9fdf31f8d8ff61123f2a1e38a38b951fa043b9e42cdb76f01d23889e560885a7bdef735e757fd28e65f13e44b1d5d7b5def31861f6a98cd0279fbc18c8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49531 - 17708 "HINFO IN 1524209831518215.2792346676713413178. udp 54 false 512" NXDOMAIN qr,rd,ra 129 0.04126012s
	
	
	==> describe nodes <==
	Name:               multinode-945500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-945500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=multinode-945500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T17_57_28_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:57:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-945500
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 18:24:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 18:21:33 +0000   Tue, 16 Apr 2024 17:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 18:21:33 +0000   Tue, 16 Apr 2024 17:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 18:21:33 +0000   Tue, 16 Apr 2024 17:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 18:21:33 +0000   Tue, 16 Apr 2024 18:21:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.83.104
	  Hostname:    multinode-945500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c83d87a46b474bc4aea8745e5ece9be6
	  System UUID:                f07a2411-3a9a-ca4a-afc3-5ddc78eea33d
	  Boot ID:                    3805f3b9-d291-4409-800b-e0db40d7fbf1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-jxvx2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-76f75df574-86z7h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-945500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m13s
	  kube-system                 kindnet-tp7jl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-945500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 kube-controller-manager-multinode-945500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-rfxsg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-945500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 26m                    kube-proxy       
	  Normal  Starting                 3m11s                  kube-proxy       
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-945500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-945500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-945500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-945500 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-945500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-945500 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-945500 event: Registered Node multinode-945500 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-945500 status is now: NodeReady
	  Normal  Starting                 3m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m18s (x8 over 3m18s)  kubelet          Node multinode-945500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m18s (x8 over 3m18s)  kubelet          Node multinode-945500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m18s (x7 over 3m18s)  kubelet          Node multinode-945500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                   node-controller  Node multinode-945500 event: Registered Node multinode-945500 in Controller
	
	
	Name:               multinode-945500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-945500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=multinode-945500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T18_00_22_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 18:00:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-945500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 18:13:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 16 Apr 2024 18:11:34 +0000   Tue, 16 Apr 2024 18:22:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 16 Apr 2024 18:11:34 +0000   Tue, 16 Apr 2024 18:22:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 16 Apr 2024 18:11:34 +0000   Tue, 16 Apr 2024 18:22:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 16 Apr 2024 18:11:34 +0000   Tue, 16 Apr 2024 18:22:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.19.91.6
	  Hostname:    multinode-945500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ffb3ffe1886460d8f31c8166436085f
	  System UUID:                cd85b681-7c9d-6842-b820-50fe53a2fe10
	  Boot ID:                    391147f8-cd3e-46f1-9b23-dd3a04f0f3a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ns8nx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-7pg6g               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-q5bdr            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24m                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-945500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-945500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-945500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m                node-controller  Node multinode-945500-m02 event: Registered Node multinode-945500-m02 in Controller
	  Normal  NodeReady                24m                kubelet          Node multinode-945500-m02 status is now: NodeReady
	  Normal  RegisteredNode           3m1s               node-controller  Node multinode-945500-m02 event: Registered Node multinode-945500-m02 in Controller
	  Normal  NodeNotReady             2m21s              node-controller  Node multinode-945500-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	              * this clock source is slow. Consider trying other clock sources
	[  +5.885487] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.696048] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.845622] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[Apr16 18:20] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.779013] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.163226] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Apr16 18:21] systemd-fstab-generator[975]: Ignoring "noauto" option for root device
	[  +0.085820] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.500786] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +0.184960] systemd-fstab-generator[1025]: Ignoring "noauto" option for root device
	[  +0.223323] systemd-fstab-generator[1039]: Ignoring "noauto" option for root device
	[  +2.791890] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[  +0.194166] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.188456] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	[  +0.253045] systemd-fstab-generator[1258]: Ignoring "noauto" option for root device
	[  +0.763637] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[  +0.085344] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.431269] systemd-fstab-generator[1503]: Ignoring "noauto" option for root device
	[  +5.836083] kauditd_printk_skb: 84 callbacks suppressed
	[  +3.190766] systemd-fstab-generator[2306]: Ignoring "noauto" option for root device
	[  +4.378905] kauditd_printk_skb: 70 callbacks suppressed
	[ +22.996345] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [d79493392db4] <==
	{"level":"info","ts":"2024-04-16T18:21:24.733443Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T18:21:24.733672Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T18:21:24.73439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 switched to configuration voters=(16790251013889734582)"}
	{"level":"info","ts":"2024-04-16T18:21:24.734475Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ba3fb579e58fbd76","local-member-id":"e902f456ac8a37b6","added-peer-id":"e902f456ac8a37b6","added-peer-peer-urls":["https://172.19.91.227:2380"]}
	{"level":"info","ts":"2024-04-16T18:21:24.73417Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T18:21:24.734314Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.83.104:2380"}
	{"level":"info","ts":"2024-04-16T18:21:24.735023Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.83.104:2380"}
	{"level":"info","ts":"2024-04-16T18:21:24.734822Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ba3fb579e58fbd76","local-member-id":"e902f456ac8a37b6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T18:21:24.735303Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T18:21:24.74625Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e902f456ac8a37b6","initial-advertise-peer-urls":["https://172.19.83.104:2380"],"listen-peer-urls":["https://172.19.83.104:2380"],"advertise-client-urls":["https://172.19.83.104:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.83.104:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T18:21:24.747212Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T18:21:26.304555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-16T18:21:26.304612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-16T18:21:26.304645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 received MsgPreVoteResp from e902f456ac8a37b6 at term 2"}
	{"level":"info","ts":"2024-04-16T18:21:26.30466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 became candidate at term 3"}
	{"level":"info","ts":"2024-04-16T18:21:26.304776Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 received MsgVoteResp from e902f456ac8a37b6 at term 3"}
	{"level":"info","ts":"2024-04-16T18:21:26.304953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e902f456ac8a37b6 became leader at term 3"}
	{"level":"info","ts":"2024-04-16T18:21:26.305039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e902f456ac8a37b6 elected leader e902f456ac8a37b6 at term 3"}
	{"level":"info","ts":"2024-04-16T18:21:26.307971Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e902f456ac8a37b6","local-member-attributes":"{Name:multinode-945500 ClientURLs:[https://172.19.83.104:2379]}","request-path":"/0/members/e902f456ac8a37b6/attributes","cluster-id":"ba3fb579e58fbd76","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T18:21:26.308243Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T18:21:26.30894Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T18:21:26.310085Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T18:21:26.310291Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T18:21:26.312069Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.83.104:2379"}
	{"level":"info","ts":"2024-04-16T18:21:26.31482Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:24:41 up 4 min,  0 users,  load average: 0.13, 0.13, 0.05
	Linux multinode-945500 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [be8c417a7ef0] <==
	I0416 18:23:40.325513       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:23:50.336740       1 main.go:223] Handling node with IPs: map[172.19.83.104:{}]
	I0416 18:23:50.336919       1 main.go:227] handling current node
	I0416 18:23:50.336972       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:23:50.337020       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:24:00.350810       1 main.go:223] Handling node with IPs: map[172.19.83.104:{}]
	I0416 18:24:00.350998       1 main.go:227] handling current node
	I0416 18:24:00.351018       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:24:00.351031       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:24:10.367684       1 main.go:223] Handling node with IPs: map[172.19.83.104:{}]
	I0416 18:24:10.367728       1 main.go:227] handling current node
	I0416 18:24:10.367741       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:24:10.367748       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:24:20.374818       1 main.go:223] Handling node with IPs: map[172.19.83.104:{}]
	I0416 18:24:20.374911       1 main.go:227] handling current node
	I0416 18:24:20.374922       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:24:20.374929       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:24:30.388796       1 main.go:223] Handling node with IPs: map[172.19.83.104:{}]
	I0416 18:24:30.388950       1 main.go:227] handling current node
	I0416 18:24:30.388963       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:24:30.388976       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:24:40.404452       1 main.go:223] Handling node with IPs: map[172.19.83.104:{}]
	I0416 18:24:40.404476       1 main.go:227] handling current node
	I0416 18:24:40.404488       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:24:40.404494       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [cd37920f1d54] <==
	I0416 18:12:49.078677       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:12:59.092051       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:12:59.092161       1 main.go:227] handling current node
	I0416 18:12:59.092173       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:12:59.092181       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:13:09.104371       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:13:09.104662       1 main.go:227] handling current node
	I0416 18:13:09.104694       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:13:09.104713       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:13:19.109865       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:13:19.109964       1 main.go:227] handling current node
	I0416 18:13:19.109977       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:13:19.109985       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:13:29.124276       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:13:29.124368       1 main.go:227] handling current node
	I0416 18:13:29.124424       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:13:29.124433       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:13:39.136351       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:13:39.136373       1 main.go:227] handling current node
	I0416 18:13:39.136413       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:13:39.136421       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	I0416 18:13:49.142277       1 main.go:223] Handling node with IPs: map[172.19.91.227:{}]
	I0416 18:13:49.142370       1 main.go:227] handling current node
	I0416 18:13:49.142406       1 main.go:223] Handling node with IPs: map[172.19.91.6:{}]
	I0416 18:13:49.142414       1 main.go:250] Node multinode-945500-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [f57459498855] <==
	I0416 18:21:27.751618       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0416 18:21:27.751624       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0416 18:21:27.751630       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0416 18:21:27.851427       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 18:21:27.859960       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 18:21:27.861503       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 18:21:27.862000       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0416 18:21:27.869318       1 aggregator.go:165] initial CRD sync complete...
	I0416 18:21:27.869852       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 18:21:27.869986       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 18:21:27.870101       1 cache.go:39] Caches are synced for autoregister controller
	I0416 18:21:27.928054       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 18:21:27.928423       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 18:21:27.928321       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 18:21:27.928375       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0416 18:21:27.935668       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 18:21:28.747594       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0416 18:21:29.069357       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.83.104]
	I0416 18:21:29.071330       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 18:21:29.080751       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 18:21:30.108882       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 18:21:30.309371       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 18:21:30.321586       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 18:21:30.400420       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 18:21:30.409273       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [64cef04cd6ae] <==
	I0416 18:21:40.405510       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0416 18:21:40.405950       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="70.807µs"
	I0416 18:21:40.406186       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="149.416µs"
	I0416 18:21:40.406982       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0416 18:21:40.411716       1 shared_informer.go:318] Caches are synced for disruption
	I0416 18:21:40.414977       1 shared_informer.go:318] Caches are synced for PVC protection
	I0416 18:21:40.416389       1 shared_informer.go:318] Caches are synced for PV protection
	I0416 18:21:40.416410       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0416 18:21:40.416915       1 shared_informer.go:318] Caches are synced for cronjob
	I0416 18:21:40.421010       1 shared_informer.go:318] Caches are synced for deployment
	I0416 18:21:40.425001       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0416 18:21:40.425227       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0416 18:21:40.426662       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0416 18:21:40.426940       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0416 18:21:40.501485       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 18:21:40.557713       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 18:21:40.940097       1 shared_informer.go:318] Caches are synced for garbage collector
	I0416 18:21:40.956236       1 shared_informer.go:318] Caches are synced for garbage collector
	I0416 18:21:40.956364       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0416 18:22:20.385413       1 event.go:376] "Event occurred" object="multinode-945500-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-945500-m02 status is now: NodeNotReady"
	I0416 18:22:20.394894       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ns8nx" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 18:22:20.414458       1 event.go:376] "Event occurred" object="kube-system/kindnet-7pg6g" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 18:22:20.430511       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="35.135745ms"
	I0416 18:22:20.430595       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="37.102µs"
	I0416 18:22:20.440471       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-q5bdr" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-controller-manager [91288754cb0b] <==
	I0416 17:57:41.176487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="38.505µs"
	I0416 17:57:50.419156       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="87.708µs"
	I0416 17:57:50.439046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="77.007µs"
	I0416 17:57:52.289724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="340.797µs"
	I0416 17:57:52.327958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="8.879815ms"
	I0416 17:57:52.329283       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="77.899µs"
	I0416 17:57:54.522679       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0416 18:00:21.143291       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-945500-m02\" does not exist"
	I0416 18:00:21.160886       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7pg6g"
	I0416 18:00:21.165863       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q5bdr"
	I0416 18:00:21.190337       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-945500-m02" podCIDRs=["10.244.1.0/24"]
	I0416 18:00:24.552622       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-945500-m02"
	I0416 18:00:24.552697       1 event.go:376] "Event occurred" object="multinode-945500-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-945500-m02 event: Registered Node multinode-945500-m02 in Controller"
	I0416 18:00:41.273225       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-945500-m02"
	I0416 18:01:05.000162       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 2"
	I0416 18:01:05.018037       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-ns8nx"
	I0416 18:01:05.041877       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-jxvx2"
	I0416 18:01:05.061957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="58.524499ms"
	I0416 18:01:05.079880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="17.398354ms"
	I0416 18:01:05.080339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="67.502µs"
	I0416 18:01:05.093042       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="55.802µs"
	I0416 18:01:07.013162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="7.557663ms"
	I0416 18:01:07.014558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="1.14747ms"
	I0416 18:01:07.433900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="7.930386ms"
	I0416 18:01:07.434257       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="37.403µs"
	
	
	==> kube-proxy [d5c790ea038e] <==
	I0416 18:21:29.388307       1 server_others.go:72] "Using iptables proxy"
	I0416 18:21:29.426833       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.83.104"]
	I0416 18:21:29.528921       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 18:21:29.528953       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 18:21:29.528966       1 server_others.go:168] "Using iptables Proxier"
	I0416 18:21:29.538942       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 18:21:29.540232       1 server.go:865] "Version info" version="v1.29.3"
	I0416 18:21:29.540261       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 18:21:29.550099       1 config.go:97] "Starting endpoint slice config controller"
	I0416 18:21:29.551804       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 18:21:29.551865       1 config.go:188] "Starting service config controller"
	I0416 18:21:29.551876       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 18:21:29.563831       1 config.go:315] "Starting node config controller"
	I0416 18:21:29.563857       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 18:21:29.652575       1 shared_informer.go:318] Caches are synced for service config
	I0416 18:21:29.652637       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 18:21:29.664920       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [f56880607ce1] <==
	I0416 17:57:41.776688       1 server_others.go:72] "Using iptables proxy"
	I0416 17:57:41.792626       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.19.91.227"]
	I0416 17:57:41.867257       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 17:57:41.867331       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:57:41.867350       1 server_others.go:168] "Using iptables Proxier"
	I0416 17:57:41.871330       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:57:41.872230       1 server.go:865] "Version info" version="v1.29.3"
	I0416 17:57:41.872370       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:57:41.874113       1 config.go:188] "Starting service config controller"
	I0416 17:57:41.874135       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 17:57:41.874160       1 config.go:97] "Starting endpoint slice config controller"
	I0416 17:57:41.874165       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 17:57:41.876871       1 config.go:315] "Starting node config controller"
	I0416 17:57:41.876896       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 17:57:41.974693       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 17:57:41.974749       1 shared_informer.go:318] Caches are synced for service config
	I0416 17:57:41.977426       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4a7c8d9808b6] <==
	W0416 17:57:25.692827       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 17:57:25.693097       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 17:57:25.711042       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 17:57:25.711136       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 17:57:25.720155       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 17:57:25.720353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 17:57:25.721550       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.721738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.738855       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 17:57:25.738995       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 17:57:25.765058       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 17:57:25.765096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0416 17:57:25.774340       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:25.774569       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:25.791990       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 17:57:25.792031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 17:57:25.929298       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 17:57:25.929342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 17:57:26.119349       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 17:57:26.119818       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0416 17:57:29.235915       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 18:13:58.180138       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0416 18:13:58.180716       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0416 18:13:58.181480       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0416 18:13:58.187601       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fb4097226e2f] <==
	I0416 18:21:25.257495       1 serving.go:380] Generated self-signed cert in-memory
	W0416 18:21:27.785143       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0416 18:21:27.785189       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 18:21:27.785203       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0416 18:21:27.785211       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0416 18:21:27.878195       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0416 18:21:27.878339       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 18:21:27.887678       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0416 18:21:27.889710       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 18:21:27.891299       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 18:21:27.889808       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0416 18:21:27.992836       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 18:21:32 multinode-945500 kubelet[1510]: E0416 18:21:32.199740    1510 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-jxvx2" podUID="61d6d0ec-5716-446c-acd3-845d2a3cd08e"
	Apr 16 18:21:33 multinode-945500 kubelet[1510]: I0416 18:21:33.804976    1510 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Apr 16 18:21:36 multinode-945500 kubelet[1510]: I0416 18:21:36.402391    1510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6d81ccf8e98228a5e5f8b4d7aabb420364ce2cab7b40c062e8441d9fc020dff"
	Apr 16 18:21:36 multinode-945500 kubelet[1510]: I0416 18:21:36.480253    1510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93beb03081351169eb3be7c42f2d20f7b41e9d5187345e9a5531b601038775b7"
	Apr 16 18:21:59 multinode-945500 kubelet[1510]: I0416 18:21:59.864749    1510 scope.go:117] "RemoveContainer" containerID="2b470472d009f138d718cc53110781187914ee6cddddb0ac7c899311fe2a4954"
	Apr 16 18:21:59 multinode-945500 kubelet[1510]: I0416 18:21:59.864955    1510 scope.go:117] "RemoveContainer" containerID="9b5d8dc009cddd327248c28cdd979fda8cc661ec190b1ce926af2862c7e7c300"
	Apr 16 18:21:59 multinode-945500 kubelet[1510]: E0416 18:21:59.865310    1510 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(3bd5cc95-eef6-473e-b6f9-898568046f1b)\"" pod="kube-system/storage-provisioner" podUID="3bd5cc95-eef6-473e-b6f9-898568046f1b"
	Apr 16 18:22:12 multinode-945500 kubelet[1510]: I0416 18:22:12.199083    1510 scope.go:117] "RemoveContainer" containerID="9b5d8dc009cddd327248c28cdd979fda8cc661ec190b1ce926af2862c7e7c300"
	Apr 16 18:22:23 multinode-945500 kubelet[1510]: I0416 18:22:23.182343    1510 scope.go:117] "RemoveContainer" containerID="736259e5d03b567d7df999244f6bd4b88f7afce1ed9a214b15090bbcbaeaa99e"
	Apr 16 18:22:23 multinode-945500 kubelet[1510]: I0416 18:22:23.217984    1510 scope.go:117] "RemoveContainer" containerID="0cae708a3787a303242dc5bb68ce95ed00951d691751e0189a740d616ca400c7"
	Apr 16 18:22:23 multinode-945500 kubelet[1510]: E0416 18:22:23.244159    1510 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:22:23 multinode-945500 kubelet[1510]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:22:23 multinode-945500 kubelet[1510]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:22:23 multinode-945500 kubelet[1510]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:22:23 multinode-945500 kubelet[1510]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:23:23 multinode-945500 kubelet[1510]: E0416 18:23:23.241538    1510 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:23:23 multinode-945500 kubelet[1510]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:23:23 multinode-945500 kubelet[1510]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:23:23 multinode-945500 kubelet[1510]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:23:23 multinode-945500 kubelet[1510]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:24:23 multinode-945500 kubelet[1510]: E0416 18:24:23.250243    1510 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:24:23 multinode-945500 kubelet[1510]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:24:23 multinode-945500 kubelet[1510]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:24:23 multinode-945500 kubelet[1510]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:24:23 multinode-945500 kubelet[1510]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 18:24:34.101814   14304 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-945500 -n multinode-945500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-945500 -n multinode-945500: (11.0347118s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-945500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (324.26s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (299.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-833900 --driver=hyperv
E0416 18:41:07.151583    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-833900 --driver=hyperv: exit status 1 (4m59.7575291s)

                                                
                                                
-- stdout --
	* [NoKubernetes-833900] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-833900" primary control-plane node in "NoKubernetes-833900" cluster

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 18:40:10.094673   11636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-833900 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-833900 -n NoKubernetes-833900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-833900 -n NoKubernetes-833900: exit status 7 (222.0647ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 18:45:09.844953   14136 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-833900" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (299.98s)
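Note on the recurring stderr warning above: the long hex directory in `.docker\contexts\meta\...\meta.json` is not random. The Docker CLI keys each context's on-disk metadata directory by the SHA-256 digest of the context name, so the path in the warning is the slot for the context named "default". A minimal sketch of that derivation (illustrative only, not part of the test suite):

```python
import hashlib

# The Docker CLI stores context metadata under
#   <docker config dir>/contexts/meta/<sha256(context name)>/meta.json
# The digest for the context name "default" is the directory
# that the warning above reports as missing.
digest = hashlib.sha256(b"default").hexdigest()
print(digest)
# 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```

The warning is therefore cosmetic here: the CLI's current-context setting points at "default", but its metadata file was never created on this Jenkins worker.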

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (421.05s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-334400 --alsologtostderr -v=1 --driver=hyperv
E0416 18:56:07.203306    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
pause_test.go:92: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p pause-334400 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (4m49.0800901s)

                                                
                                                
-- stdout --
	* [pause-334400] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "pause-334400" primary control-plane node in "pause-334400" cluster
	* Updating the running hyperv "pause-334400" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 18:55:20.881729    9908 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 18:55:20.938872    9908 out.go:291] Setting OutFile to fd 1768 ...
	I0416 18:55:20.939629    9908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:55:20.939629    9908 out.go:304] Setting ErrFile to fd 1772...
	I0416 18:55:20.939629    9908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:55:20.958463    9908 out.go:298] Setting JSON to false
	I0416 18:55:20.961493    9908 start.go:129] hostinfo: {"hostname":"minikube5","uptime":31350,"bootTime":1713262370,"procs":209,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 18:55:20.962491    9908 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 18:55:20.962700    9908 out.go:177] * [pause-334400] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 18:55:20.963747    9908 notify.go:220] Checking for updates...
	I0416 18:55:20.964437    9908 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 18:55:20.964844    9908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 18:55:20.965633    9908 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 18:55:20.966397    9908 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 18:55:20.967175    9908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 18:55:20.968118    9908 config.go:182] Loaded profile config "pause-334400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:55:20.968908    9908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 18:55:25.953857    9908 out.go:177] * Using the hyperv driver based on existing profile
	I0416 18:55:25.954497    9908 start.go:297] selected driver: hyperv
	I0416 18:55:25.954497    9908 start.go:901] validating driver "hyperv" against &{Name:pause-334400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-334400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.87.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 18:55:25.954497    9908 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 18:55:25.996912    9908 cni.go:84] Creating CNI manager for ""
	I0416 18:55:25.996912    9908 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0416 18:55:25.996912    9908 start.go:340] cluster config:
	{Name:pause-334400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-334400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.87.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 18:55:25.997550    9908 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 18:55:25.998862    9908 out.go:177] * Starting "pause-334400" primary control-plane node in "pause-334400" cluster
	I0416 18:55:25.999489    9908 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 18:55:25.999489    9908 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 18:55:25.999489    9908 cache.go:56] Caching tarball of preloaded images
	I0416 18:55:25.999489    9908 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 18:55:26.000152    9908 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 18:55:26.000226    9908 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-334400\config.json ...
	I0416 18:55:26.001718    9908 start.go:360] acquireMachinesLock for pause-334400: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 18:57:44.476164    9908 start.go:364] duration metric: took 2m18.4665808s to acquireMachinesLock for "pause-334400"
	I0416 18:57:44.477113    9908 start.go:96] Skipping create...Using existing machine configuration
	I0416 18:57:44.477142    9908 fix.go:54] fixHost starting: 
	I0416 18:57:44.477977    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-334400 ).state
	I0416 18:57:46.559598    9908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:57:46.559683    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:57:46.559683    9908 fix.go:112] recreateIfNeeded on pause-334400: state=Running err=<nil>
	W0416 18:57:46.559754    9908 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 18:57:46.599932    9908 out.go:177] * Updating the running hyperv "pause-334400" VM ...
	I0416 18:57:46.600833    9908 machine.go:94] provisionDockerMachine start ...
	I0416 18:57:46.600978    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-334400 ).state
	I0416 18:57:48.667628    9908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:57:48.667707    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:57:48.667815    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-334400 ).networkadapters[0]).ipaddresses[0]
	I0416 18:57:51.157750    9908 main.go:141] libmachine: [stdout =====>] : 172.19.87.234
	
	I0416 18:57:51.158235    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:57:51.162041    9908 main.go:141] libmachine: Using SSH client type: native
	I0416 18:57:51.162041    9908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.87.234 22 <nil> <nil>}
	I0416 18:57:51.162577    9908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 18:57:51.317530    9908 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-334400
	
	I0416 18:57:51.317580    9908 buildroot.go:166] provisioning hostname "pause-334400"
	I0416 18:57:51.317644    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-334400 ).state
	I0416 18:57:53.497726    9908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:57:53.497726    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:57:53.497993    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-334400 ).networkadapters[0]).ipaddresses[0]
	I0416 18:57:55.807820    9908 main.go:141] libmachine: [stdout =====>] : 172.19.87.234
	
	I0416 18:57:55.807856    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:57:55.811964    9908 main.go:141] libmachine: Using SSH client type: native
	I0416 18:57:55.812096    9908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.87.234 22 <nil> <nil>}
	I0416 18:57:55.812096    9908 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-334400 && echo "pause-334400" | sudo tee /etc/hostname
	I0416 18:57:55.982825    9908 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-334400
	
	I0416 18:57:56.051867    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-334400 ).state
	I0416 18:57:58.307251    9908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:57:58.307251    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:57:58.307251    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-334400 ).networkadapters[0]).ipaddresses[0]
	I0416 18:58:00.989458    9908 main.go:141] libmachine: [stdout =====>] : 172.19.87.234
	
	I0416 18:58:00.989458    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:00.993810    9908 main.go:141] libmachine: Using SSH client type: native
	I0416 18:58:00.993810    9908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.87.234 22 <nil> <nil>}
	I0416 18:58:00.993810    9908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-334400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-334400/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-334400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 18:58:01.138124    9908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 18:58:01.138124    9908 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 18:58:01.138193    9908 buildroot.go:174] setting up certificates
	I0416 18:58:01.138193    9908 provision.go:84] configureAuth start
	I0416 18:58:01.138193    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-334400 ).state
	I0416 18:58:03.352908    9908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:58:03.352983    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:03.353123    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-334400 ).networkadapters[0]).ipaddresses[0]
	I0416 18:58:05.829362    9908 main.go:141] libmachine: [stdout =====>] : 172.19.87.234
	
	I0416 18:58:05.829362    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:05.829362    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-334400 ).state
	I0416 18:58:07.883366    9908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:58:07.883366    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:07.883366    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-334400 ).networkadapters[0]).ipaddresses[0]
	I0416 18:58:10.354404    9908 main.go:141] libmachine: [stdout =====>] : 172.19.87.234
	
	I0416 18:58:10.354617    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:10.354617    9908 provision.go:143] copyHostCerts
	I0416 18:58:10.354828    9908 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0416 18:58:10.354828    9908 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0416 18:58:10.355667    9908 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0416 18:58:10.356972    9908 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0416 18:58:10.357137    9908 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0416 18:58:10.357204    9908 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0416 18:58:10.358847    9908 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0416 18:58:10.358847    9908 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0416 18:58:10.359697    9908 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0416 18:58:10.360355    9908 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-334400 san=[127.0.0.1 172.19.87.234 localhost minikube pause-334400]
	I0416 18:58:10.608221    9908 provision.go:177] copyRemoteCerts
	I0416 18:58:10.618229    9908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 18:58:10.618229    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-334400 ).state
	I0416 18:58:12.767832    9908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:58:12.767832    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:12.767832    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-334400 ).networkadapters[0]).ipaddresses[0]
	I0416 18:58:15.260685    9908 main.go:141] libmachine: [stdout =====>] : 172.19.87.234
	
	I0416 18:58:15.261309    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:15.261387    9908 sshutil.go:53] new ssh client: &{IP:172.19.87.234 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-334400\id_rsa Username:docker}
	I0416 18:58:15.379440    9908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7609404s)
	I0416 18:58:15.379744    9908 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 18:58:15.436349    9908 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 18:58:15.485562    9908 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 18:58:15.533623    9908 provision.go:87] duration metric: took 14.3945488s to configureAuth
	I0416 18:58:15.533623    9908 buildroot.go:189] setting minikube options for container-runtime
	I0416 18:58:15.533623    9908 config.go:182] Loaded profile config "pause-334400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:58:15.534155    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-334400 ).state
	I0416 18:58:17.522749    9908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:58:17.522749    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:17.522749    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-334400 ).networkadapters[0]).ipaddresses[0]
	I0416 18:58:19.955827    9908 main.go:141] libmachine: [stdout =====>] : 172.19.87.234
	
	I0416 18:58:19.956216    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:19.960152    9908 main.go:141] libmachine: Using SSH client type: native
	I0416 18:58:19.961042    9908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.87.234 22 <nil> <nil>}
	I0416 18:58:19.961064    9908 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 18:58:20.116261    9908 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0416 18:58:20.116261    9908 buildroot.go:70] root file system type: tmpfs
	I0416 18:58:20.116480    9908 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 18:58:20.116546    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-334400 ).state
	I0416 18:58:22.128138    9908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:58:22.128138    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:22.128138    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-334400 ).networkadapters[0]).ipaddresses[0]
	I0416 18:58:24.596890    9908 main.go:141] libmachine: [stdout =====>] : 172.19.87.234
	
	I0416 18:58:24.597243    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:24.600926    9908 main.go:141] libmachine: Using SSH client type: native
	I0416 18:58:24.601495    9908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.87.234 22 <nil> <nil>}
	I0416 18:58:24.601618    9908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 18:58:24.784931    9908 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 18:58:24.784931    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-334400 ).state
	I0416 18:58:27.088421    9908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:58:27.088421    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:27.088514    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-334400 ).networkadapters[0]).ipaddresses[0]
	I0416 18:58:29.707713    9908 main.go:141] libmachine: [stdout =====>] : 172.19.87.234
	
	I0416 18:58:29.707713    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:29.712517    9908 main.go:141] libmachine: Using SSH client type: native
	I0416 18:58:29.712517    9908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.87.234 22 <nil> <nil>}
	I0416 18:58:29.712517    9908 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 18:58:29.866515    9908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 18:58:29.866515    9908 machine.go:97] duration metric: took 43.2632248s to provisionDockerMachine
	I0416 18:58:29.866515    9908 start.go:293] postStartSetup for "pause-334400" (driver="hyperv")
	I0416 18:58:29.866515    9908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 18:58:29.876905    9908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 18:58:29.876905    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-334400 ).state
	I0416 18:58:32.024399    9908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:58:32.024399    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:32.024399    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-334400 ).networkadapters[0]).ipaddresses[0]
	I0416 18:58:34.624411    9908 main.go:141] libmachine: [stdout =====>] : 172.19.87.234
	
	I0416 18:58:34.625421    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:34.625580    9908 sshutil.go:53] new ssh client: &{IP:172.19.87.234 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-334400\id_rsa Username:docker}
	I0416 18:58:34.754058    9908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8768759s)
	I0416 18:58:34.766059    9908 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 18:58:34.777055    9908 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 18:58:34.777055    9908 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0416 18:58:34.777055    9908 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0416 18:58:34.778060    9908 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem -> 53242.pem in /etc/ssl/certs
	I0416 18:58:34.792051    9908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 18:58:34.818482    9908 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\53242.pem --> /etc/ssl/certs/53242.pem (1708 bytes)
	I0416 18:58:34.894040    9908 start.go:296] duration metric: took 5.0272389s for postStartSetup
	I0416 18:58:34.894130    9908 fix.go:56] duration metric: took 50.4141247s for fixHost
	I0416 18:58:34.894220    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-334400 ).state
	I0416 18:58:37.519615    9908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:58:37.519680    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:37.519680    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-334400 ).networkadapters[0]).ipaddresses[0]
	I0416 18:58:40.345740    9908 main.go:141] libmachine: [stdout =====>] : 172.19.87.234
	
	I0416 18:58:40.345815    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:40.353455    9908 main.go:141] libmachine: Using SSH client type: native
	I0416 18:58:40.353455    9908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.87.234 22 <nil> <nil>}
	I0416 18:58:40.354001    9908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 18:58:40.512421    9908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713293920.682643601
	
	I0416 18:58:40.512476    9908 fix.go:216] guest clock: 1713293920.682643601
	I0416 18:58:40.512476    9908 fix.go:229] Guest: 2024-04-16 18:58:40.682643601 +0000 UTC Remote: 2024-04-16 18:58:34.8941308 +0000 UTC m=+194.105118101 (delta=5.788512801s)
	I0416 18:58:40.512593    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-334400 ).state
	I0416 18:58:42.814566    9908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:58:42.814931    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:42.814931    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-334400 ).networkadapters[0]).ipaddresses[0]
	I0416 18:58:45.426321    9908 main.go:141] libmachine: [stdout =====>] : 172.19.87.234
	
	I0416 18:58:45.427119    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:45.436618    9908 main.go:141] libmachine: Using SSH client type: native
	I0416 18:58:45.436618    9908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.87.234 22 <nil> <nil>}
	I0416 18:58:45.437548    9908 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713293920
	I0416 18:58:45.610549    9908 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr 16 18:58:40 UTC 2024
	
	I0416 18:58:45.610549    9908 fix.go:236] clock set: Tue Apr 16 18:58:40 UTC 2024
	 (err=<nil>)
	I0416 18:58:45.610549    9908 start.go:83] releasing machines lock for "pause-334400", held for 1m1.1309121s
	I0416 18:58:45.611144    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-334400 ).state
	I0416 18:58:48.056661    9908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:58:48.056661    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:48.056661    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-334400 ).networkadapters[0]).ipaddresses[0]
	I0416 18:58:50.838448    9908 main.go:141] libmachine: [stdout =====>] : 172.19.87.234
	
	I0416 18:58:50.838770    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:50.844100    9908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 18:58:50.844976    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-334400 ).state
	I0416 18:58:50.851093    9908 ssh_runner.go:195] Run: cat /version.json
	I0416 18:58:50.851093    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-334400 ).state
	I0416 18:58:53.412670    9908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:58:53.412788    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:53.412788    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-334400 ).networkadapters[0]).ipaddresses[0]
	I0416 18:58:53.434805    9908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:58:53.434805    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:53.434805    9908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-334400 ).networkadapters[0]).ipaddresses[0]
	I0416 18:58:56.163426    9908 main.go:141] libmachine: [stdout =====>] : 172.19.87.234
	
	I0416 18:58:56.163426    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:56.164105    9908 sshutil.go:53] new ssh client: &{IP:172.19.87.234 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-334400\id_rsa Username:docker}
	I0416 18:58:56.199825    9908 main.go:141] libmachine: [stdout =====>] : 172.19.87.234
	
	I0416 18:58:56.199825    9908 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:58:56.200833    9908 sshutil.go:53] new ssh client: &{IP:172.19.87.234 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-334400\id_rsa Username:docker}
	I0416 18:58:56.272188    9908 ssh_runner.go:235] Completed: cat /version.json: (5.420696s)
	I0416 18:58:56.289668    9908 ssh_runner.go:195] Run: systemctl --version
	I0416 18:58:56.356267    9908 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5118534s)
	I0416 18:58:56.372058    9908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 18:58:56.382770    9908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 18:58:56.398033    9908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 18:58:56.418393    9908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0416 18:58:56.418576    9908 start.go:494] detecting cgroup driver to use...
	I0416 18:58:56.418968    9908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:58:56.468501    9908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 18:58:56.498657    9908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 18:58:56.520912    9908 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 18:58:56.529562    9908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 18:58:56.565321    9908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:58:56.605067    9908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 18:58:56.644242    9908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 18:58:56.689622    9908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 18:58:56.728940    9908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 18:58:56.766938    9908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 18:58:56.808321    9908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 18:58:56.842099    9908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 18:58:56.878101    9908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 18:58:56.912109    9908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:58:57.221648    9908 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 18:58:57.259794    9908 start.go:494] detecting cgroup driver to use...
	I0416 18:58:57.274786    9908 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 18:58:57.323788    9908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:58:57.364752    9908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 18:58:57.418836    9908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:58:57.460652    9908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 18:58:57.486736    9908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:58:57.533540    9908 ssh_runner.go:195] Run: which cri-dockerd
	I0416 18:58:57.551923    9908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 18:58:57.567963    9908 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 18:58:57.611057    9908 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 18:58:57.872023    9908 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 18:58:58.108260    9908 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 18:58:58.108260    9908 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 18:58:58.156281    9908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:58:58.427544    9908 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 19:00:09.750526    9908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3189309s)
	I0416 19:00:09.759492    9908 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 19:00:09.812308    9908 out.go:177] 
	W0416 19:00:09.813383    9908 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 18:53:59 pause-334400 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:53:59 pause-334400 dockerd[672]: time="2024-04-16T18:53:59.144054469Z" level=info msg="Starting up"
	Apr 16 18:53:59 pause-334400 dockerd[672]: time="2024-04-16T18:53:59.145438882Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 18:53:59 pause-334400 dockerd[672]: time="2024-04-16T18:53:59.150364118Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.178528121Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.203998637Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204185633Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204349017Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204367727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204447968Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204537114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204733015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204837369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204860380Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204871986Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204957130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.205488704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.208663139Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.208759288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.208910166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.209008617Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.209184007Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.209327081Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.209342388Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218007651Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218335920Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218417662Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218521015Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218540925Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218658986Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218906213Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219015870Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219115121Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219132430Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219227479Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219244387Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219255893Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219269000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219281807Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219302918Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219316325Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219327530Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219351543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219363849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219374855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219386761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219397366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219413875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219429183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219448793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219462000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219475006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219489013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219499719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219511525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219527533Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219553146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219578259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219589265Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219634788Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219649796Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219660502Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219669907Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219753249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219847998Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219860605Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.220060808Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.220138348Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.220292127Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.220331447Z" level=info msg="containerd successfully booted in 0.042957s"
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.195972415Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.214556102Z" level=info msg="Loading containers: start."
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.427505344Z" level=info msg="Loading containers: done."
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.444768246Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.444948435Z" level=info msg="Daemon has completed initialization"
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.510686608Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.511323822Z" level=info msg="API listen on [::]:2376"
	Apr 16 18:54:00 pause-334400 systemd[1]: Started Docker Application Container Engine.
	Apr 16 18:54:29 pause-334400 dockerd[672]: time="2024-04-16T18:54:29.208427817Z" level=info msg="Processing signal 'terminated'"
	Apr 16 18:54:29 pause-334400 dockerd[672]: time="2024-04-16T18:54:29.210154695Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 18:54:29 pause-334400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 18:54:29 pause-334400 dockerd[672]: time="2024-04-16T18:54:29.211210303Z" level=info msg="Daemon shutdown complete"
	Apr 16 18:54:29 pause-334400 dockerd[672]: time="2024-04-16T18:54:29.211523135Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 18:54:29 pause-334400 dockerd[672]: time="2024-04-16T18:54:29.211716955Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 18:54:30 pause-334400 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 18:54:30 pause-334400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 18:54:30 pause-334400 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:54:30 pause-334400 dockerd[1029]: time="2024-04-16T18:54:30.285118238Z" level=info msg="Starting up"
	Apr 16 18:54:30 pause-334400 dockerd[1029]: time="2024-04-16T18:54:30.286750405Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 18:54:30 pause-334400 dockerd[1029]: time="2024-04-16T18:54:30.291470890Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1035
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.337283092Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364803817Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364860623Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364902127Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364916829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364944432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364957533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365293067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365379376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365398478Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365489788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365525291Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365711010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.368705618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.368808228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.368975545Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369068855Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369099058Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369117060Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369130961Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369259274Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369306179Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369322781Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369338483Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369353384Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369403889Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369761626Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369901040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369988949Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370010852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370026353Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370040155Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370053356Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370069358Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370084159Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370096960Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370109762Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370121963Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370141965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370163267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370177669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370191170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370203871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370216973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370229974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370242675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370257677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370272879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370284580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370296081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370308282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370323684Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370344586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370356587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370368488Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370480200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370778730Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370869740Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370885841Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370951548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.371035557Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.371049758Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.371400894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.371684223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.371827338Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.372008057Z" level=info msg="containerd successfully booted in 0.038797s"
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.335093515Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.352183570Z" level=info msg="Loading containers: start."
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.516459132Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.586774250Z" level=info msg="Loading containers: done."
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.602106924Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.602274641Z" level=info msg="Daemon has completed initialization"
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.646147544Z" level=info msg="API listen on [::]:2376"
	Apr 16 18:54:31 pause-334400 systemd[1]: Started Docker Application Container Engine.
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.646165446Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 18:54:43 pause-334400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 18:54:43 pause-334400 dockerd[1029]: time="2024-04-16T18:54:43.726783596Z" level=info msg="Processing signal 'terminated'"
	Apr 16 18:54:43 pause-334400 dockerd[1029]: time="2024-04-16T18:54:43.729145539Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 18:54:43 pause-334400 dockerd[1029]: time="2024-04-16T18:54:43.729751001Z" level=info msg="Daemon shutdown complete"
	Apr 16 18:54:43 pause-334400 dockerd[1029]: time="2024-04-16T18:54:43.729810007Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 18:54:43 pause-334400 dockerd[1029]: time="2024-04-16T18:54:43.729817408Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 18:54:44 pause-334400 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 18:54:44 pause-334400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 18:54:44 pause-334400 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:54:44 pause-334400 dockerd[1335]: time="2024-04-16T18:54:44.804149986Z" level=info msg="Starting up"
	Apr 16 18:54:44 pause-334400 dockerd[1335]: time="2024-04-16T18:54:44.805099483Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 18:54:44 pause-334400 dockerd[1335]: time="2024-04-16T18:54:44.806230800Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1341
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.841400810Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.867879228Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868078248Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868222363Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868244765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868276068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868298471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868478389Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868640106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868662808Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868675009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868702512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868838526Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.871967447Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872095760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872264878Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872365288Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872394791Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872418494Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872432795Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872659218Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872709423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872728825Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872744627Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872765829Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872816434Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873193373Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873332487Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873350189Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873365491Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873382192Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873396994Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873410895Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873426897Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873442399Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873467101Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873483003Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873496604Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873520107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873535408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873550110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873598515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873614416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873629418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873642519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873656621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873670722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873687224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873701525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873714427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873728028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873745330Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873770032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873784234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873797235Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873846440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873956451Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873974253Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873987355Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874127969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874146171Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874158972Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874404397Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874718930Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874938152Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.875141273Z" level=info msg="containerd successfully booted in 0.037263s"
	Apr 16 18:54:45 pause-334400 dockerd[1335]: time="2024-04-16T18:54:45.851686013Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.074298564Z" level=info msg="Loading containers: start."
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.241711649Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.318668948Z" level=info msg="Loading containers: done."
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.344353285Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.344428992Z" level=info msg="Daemon has completed initialization"
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.388485315Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.389323401Z" level=info msg="API listen on [::]:2376"
	Apr 16 18:54:46 pause-334400 systemd[1]: Started Docker Application Container Engine.
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.127086261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.127188978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.127202280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.127373609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.136092969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.136430026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.136595153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.136835494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.186237768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.186298078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.186323282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.186402795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.189542221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.193913553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.193945559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.194307119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.418505568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.418659794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.418692700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.418970046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.501981949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.502321306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.502403920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.502692368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.524520424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.524822975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.524860181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.525463282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.603062178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.604042543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.608287153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.608622410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.188903248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.189745548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.189869363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.190791073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.464195909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.466458079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.466615398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.466914933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:17 pause-334400 dockerd[1341]: time="2024-04-16T18:55:17.866904442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:55:17 pause-334400 dockerd[1341]: time="2024-04-16T18:55:17.867047659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:55:17 pause-334400 dockerd[1341]: time="2024-04-16T18:55:17.867063461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:17 pause-334400 dockerd[1341]: time="2024-04-16T18:55:17.867627928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:18 pause-334400 dockerd[1341]: time="2024-04-16T18:55:18.312857048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:55:18 pause-334400 dockerd[1341]: time="2024-04-16T18:55:18.313069072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:55:18 pause-334400 dockerd[1341]: time="2024-04-16T18:55:18.313098276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:18 pause-334400 dockerd[1341]: time="2024-04-16T18:55:18.313263595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:58:58 pause-334400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.621999568Z" level=info msg="Processing signal 'terminated'"
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.807249476Z" level=info msg="ignoring event" container=e8005ff3c4657cb34819165ecb48cb9019b29277906428f55f19ec1d6a57ff43 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.809696686Z" level=info msg="shim disconnected" id=e8005ff3c4657cb34819165ecb48cb9019b29277906428f55f19ec1d6a57ff43 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.809799195Z" level=warning msg="cleaning up after shim disconnected" id=e8005ff3c4657cb34819165ecb48cb9019b29277906428f55f19ec1d6a57ff43 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.809810396Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.819122496Z" level=info msg="ignoring event" container=66c6dba2cbcd35ba5f85c71d9856bd7d33d3d48e40dc7310bc1da2675fc23166 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.819649841Z" level=info msg="shim disconnected" id=66c6dba2cbcd35ba5f85c71d9856bd7d33d3d48e40dc7310bc1da2675fc23166 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.819733048Z" level=warning msg="cleaning up after shim disconnected" id=66c6dba2cbcd35ba5f85c71d9856bd7d33d3d48e40dc7310bc1da2675fc23166 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.819744249Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.843735610Z" level=info msg="ignoring event" container=3894d46335dba82896d8ab9b9be40632713f24aa59ab0693ead9c3d6933353a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.844802901Z" level=info msg="shim disconnected" id=3894d46335dba82896d8ab9b9be40632713f24aa59ab0693ead9c3d6933353a5 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.845009519Z" level=warning msg="cleaning up after shim disconnected" id=3894d46335dba82896d8ab9b9be40632713f24aa59ab0693ead9c3d6933353a5 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.845067124Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.853795474Z" level=info msg="ignoring event" container=f1384b9ad98a9b3db8874f6c20017f6d7102d3393fb9e72830204ace12d96028 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.854252513Z" level=info msg="shim disconnected" id=f1384b9ad98a9b3db8874f6c20017f6d7102d3393fb9e72830204ace12d96028 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.855241698Z" level=warning msg="cleaning up after shim disconnected" id=f1384b9ad98a9b3db8874f6c20017f6d7102d3393fb9e72830204ace12d96028 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.855400211Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.873912801Z" level=info msg="ignoring event" container=87627c61554c5bf0d835f282207bc860f90e42f658d5b2202186dc1d60805efb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.874005709Z" level=info msg="ignoring event" container=b46f9243c365d23b3c6c7b380d904a7fac7d1d4b2f4f36bdb79758a1419e32a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.875556142Z" level=info msg="shim disconnected" id=b46f9243c365d23b3c6c7b380d904a7fac7d1d4b2f4f36bdb79758a1419e32a3 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.875623148Z" level=warning msg="cleaning up after shim disconnected" id=b46f9243c365d23b3c6c7b380d904a7fac7d1d4b2f4f36bdb79758a1419e32a3 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.875634049Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.886102748Z" level=info msg="ignoring event" container=67fe5eaae833967886117e3cb8f13b0c20faf89273fdf02fa26c3a1135ab2976 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.886416575Z" level=info msg="shim disconnected" id=67fe5eaae833967886117e3cb8f13b0c20faf89273fdf02fa26c3a1135ab2976 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.888649567Z" level=warning msg="cleaning up after shim disconnected" id=67fe5eaae833967886117e3cb8f13b0c20faf89273fdf02fa26c3a1135ab2976 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.888916790Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.897011585Z" level=info msg="shim disconnected" id=87627c61554c5bf0d835f282207bc860f90e42f658d5b2202186dc1d60805efb namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.898670827Z" level=warning msg="cleaning up after shim disconnected" id=87627c61554c5bf0d835f282207bc860f90e42f658d5b2202186dc1d60805efb namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.898796338Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.915793698Z" level=info msg="shim disconnected" id=fb1de6e7c040b967267f2fc48d4804c780ba0e9dabf724bda72fab5dd6f7aca3 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.915921709Z" level=warning msg="cleaning up after shim disconnected" id=fb1de6e7c040b967267f2fc48d4804c780ba0e9dabf724bda72fab5dd6f7aca3 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.915934710Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.916987900Z" level=info msg="shim disconnected" id=289b9a8be28ed2b175c9a155d6fd59dc07056b4b3bc1767b7b727eaaa8059341 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.917253823Z" level=info msg="ignoring event" container=fb1de6e7c040b967267f2fc48d4804c780ba0e9dabf724bda72fab5dd6f7aca3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.917371733Z" level=info msg="ignoring event" container=289b9a8be28ed2b175c9a155d6fd59dc07056b4b3bc1767b7b727eaaa8059341 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.917404436Z" level=info msg="ignoring event" container=56bf229084a1798cc6506e97a2e264b00084e846bb6db8afc46edcfdf5762123 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.917036804Z" level=warning msg="cleaning up after shim disconnected" id=289b9a8be28ed2b175c9a155d6fd59dc07056b4b3bc1767b7b727eaaa8059341 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.917473342Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.932478731Z" level=info msg="shim disconnected" id=56bf229084a1798cc6506e97a2e264b00084e846bb6db8afc46edcfdf5762123 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.932741253Z" level=warning msg="cleaning up after shim disconnected" id=56bf229084a1798cc6506e97a2e264b00084e846bb6db8afc46edcfdf5762123 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.932840562Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:59:03 pause-334400 dockerd[1335]: time="2024-04-16T18:59:03.747004588Z" level=info msg="ignoring event" container=1cac56013cab1560f2c8f00ad8845165e216a0fd5bbd8a774dd14ba8a42160a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:59:03 pause-334400 dockerd[1341]: time="2024-04-16T18:59:03.749992145Z" level=info msg="shim disconnected" id=1cac56013cab1560f2c8f00ad8845165e216a0fd5bbd8a774dd14ba8a42160a8 namespace=moby
	Apr 16 18:59:03 pause-334400 dockerd[1341]: time="2024-04-16T18:59:03.756333890Z" level=warning msg="cleaning up after shim disconnected" id=1cac56013cab1560f2c8f00ad8845165e216a0fd5bbd8a774dd14ba8a42160a8 namespace=moby
	Apr 16 18:59:03 pause-334400 dockerd[1341]: time="2024-04-16T18:59:03.756420297Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.721130426Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=9ee1d12a013f97405b30b2392817c7dcc4a9bb16781d51649279c155d91f9f60
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.760586284Z" level=info msg="ignoring event" container=9ee1d12a013f97405b30b2392817c7dcc4a9bb16781d51649279c155d91f9f60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:59:08 pause-334400 dockerd[1341]: time="2024-04-16T18:59:08.760981171Z" level=info msg="shim disconnected" id=9ee1d12a013f97405b30b2392817c7dcc4a9bb16781d51649279c155d91f9f60 namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1341]: time="2024-04-16T18:59:08.761516753Z" level=warning msg="cleaning up after shim disconnected" id=9ee1d12a013f97405b30b2392817c7dcc4a9bb16781d51649279c155d91f9f60 namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1341]: time="2024-04-16T18:59:08.761627349Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1341]: time="2024-04-16T18:59:08.780697501Z" level=warning msg="cleanup warnings time=\"2024-04-16T18:59:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.805663352Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.806727216Z" level=info msg="Daemon shutdown complete"
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.806900910Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.806912309Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 18:59:09 pause-334400 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 18:59:09 pause-334400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 18:59:09 pause-334400 systemd[1]: docker.service: Consumed 6.733s CPU time.
	Apr 16 18:59:09 pause-334400 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:59:09 pause-334400 dockerd[4154]: time="2024-04-16T18:59:09.887046510Z" level=info msg="Starting up"
	Apr 16 19:00:09 pause-334400 dockerd[4154]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 19:00:09 pause-334400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 19:00:09 pause-334400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 19:00:09 pause-334400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 18:53:59 pause-334400 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:53:59 pause-334400 dockerd[672]: time="2024-04-16T18:53:59.144054469Z" level=info msg="Starting up"
	Apr 16 18:53:59 pause-334400 dockerd[672]: time="2024-04-16T18:53:59.145438882Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 18:53:59 pause-334400 dockerd[672]: time="2024-04-16T18:53:59.150364118Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.178528121Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.203998637Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204185633Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204349017Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204367727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204447968Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204537114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204733015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204837369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204860380Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204871986Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204957130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.205488704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.208663139Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.208759288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.208910166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.209008617Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.209184007Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.209327081Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.209342388Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218007651Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218335920Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218417662Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218521015Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218540925Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218658986Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218906213Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219015870Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219115121Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219132430Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219227479Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219244387Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219255893Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219269000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219281807Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219302918Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219316325Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219327530Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219351543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219363849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219374855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219386761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219397366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219413875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219429183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219448793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219462000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219475006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219489013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219499719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219511525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219527533Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219553146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219578259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219589265Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219634788Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219649796Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219660502Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219669907Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219753249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219847998Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219860605Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.220060808Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.220138348Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.220292127Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.220331447Z" level=info msg="containerd successfully booted in 0.042957s"
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.195972415Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.214556102Z" level=info msg="Loading containers: start."
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.427505344Z" level=info msg="Loading containers: done."
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.444768246Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.444948435Z" level=info msg="Daemon has completed initialization"
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.510686608Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.511323822Z" level=info msg="API listen on [::]:2376"
	Apr 16 18:54:00 pause-334400 systemd[1]: Started Docker Application Container Engine.
	Apr 16 18:54:29 pause-334400 dockerd[672]: time="2024-04-16T18:54:29.208427817Z" level=info msg="Processing signal 'terminated'"
	Apr 16 18:54:29 pause-334400 dockerd[672]: time="2024-04-16T18:54:29.210154695Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 18:54:29 pause-334400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 18:54:29 pause-334400 dockerd[672]: time="2024-04-16T18:54:29.211210303Z" level=info msg="Daemon shutdown complete"
	Apr 16 18:54:29 pause-334400 dockerd[672]: time="2024-04-16T18:54:29.211523135Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 18:54:29 pause-334400 dockerd[672]: time="2024-04-16T18:54:29.211716955Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 18:54:30 pause-334400 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 18:54:30 pause-334400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 18:54:30 pause-334400 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:54:30 pause-334400 dockerd[1029]: time="2024-04-16T18:54:30.285118238Z" level=info msg="Starting up"
	Apr 16 18:54:30 pause-334400 dockerd[1029]: time="2024-04-16T18:54:30.286750405Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 18:54:30 pause-334400 dockerd[1029]: time="2024-04-16T18:54:30.291470890Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1035
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.337283092Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364803817Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364860623Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364902127Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364916829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364944432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364957533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365293067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365379376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365398478Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365489788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365525291Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365711010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.368705618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.368808228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.368975545Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369068855Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369099058Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369117060Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369130961Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369259274Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369306179Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369322781Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369338483Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369353384Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369403889Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369761626Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369901040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369988949Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370010852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370026353Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370040155Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370053356Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370069358Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370084159Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370096960Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370109762Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370121963Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370141965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370163267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370177669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370191170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370203871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370216973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370229974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370242675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370257677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370272879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370284580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370296081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370308282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370323684Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370344586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370356587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370368488Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370480200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370778730Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370869740Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370885841Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370951548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.371035557Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.371049758Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.371400894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.371684223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.371827338Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.372008057Z" level=info msg="containerd successfully booted in 0.038797s"
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.335093515Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.352183570Z" level=info msg="Loading containers: start."
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.516459132Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.586774250Z" level=info msg="Loading containers: done."
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.602106924Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.602274641Z" level=info msg="Daemon has completed initialization"
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.646147544Z" level=info msg="API listen on [::]:2376"
	Apr 16 18:54:31 pause-334400 systemd[1]: Started Docker Application Container Engine.
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.646165446Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 18:54:43 pause-334400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 18:54:43 pause-334400 dockerd[1029]: time="2024-04-16T18:54:43.726783596Z" level=info msg="Processing signal 'terminated'"
	Apr 16 18:54:43 pause-334400 dockerd[1029]: time="2024-04-16T18:54:43.729145539Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 18:54:43 pause-334400 dockerd[1029]: time="2024-04-16T18:54:43.729751001Z" level=info msg="Daemon shutdown complete"
	Apr 16 18:54:43 pause-334400 dockerd[1029]: time="2024-04-16T18:54:43.729810007Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 18:54:43 pause-334400 dockerd[1029]: time="2024-04-16T18:54:43.729817408Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 18:54:44 pause-334400 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 18:54:44 pause-334400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 18:54:44 pause-334400 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:54:44 pause-334400 dockerd[1335]: time="2024-04-16T18:54:44.804149986Z" level=info msg="Starting up"
	Apr 16 18:54:44 pause-334400 dockerd[1335]: time="2024-04-16T18:54:44.805099483Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 18:54:44 pause-334400 dockerd[1335]: time="2024-04-16T18:54:44.806230800Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1341
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.841400810Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.867879228Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868078248Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868222363Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868244765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868276068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868298471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868478389Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868640106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868662808Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868675009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868702512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868838526Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.871967447Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872095760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872264878Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872365288Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872394791Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872418494Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872432795Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872659218Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872709423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872728825Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872744627Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872765829Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872816434Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873193373Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873332487Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873350189Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873365491Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873382192Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873396994Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873410895Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873426897Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873442399Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873467101Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873483003Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873496604Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873520107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873535408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873550110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873598515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873614416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873629418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873642519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873656621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873670722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873687224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873701525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873714427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873728028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873745330Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873770032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873784234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873797235Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873846440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873956451Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873974253Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873987355Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874127969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874146171Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874158972Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874404397Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874718930Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874938152Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.875141273Z" level=info msg="containerd successfully booted in 0.037263s"
	Apr 16 18:54:45 pause-334400 dockerd[1335]: time="2024-04-16T18:54:45.851686013Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.074298564Z" level=info msg="Loading containers: start."
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.241711649Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.318668948Z" level=info msg="Loading containers: done."
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.344353285Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.344428992Z" level=info msg="Daemon has completed initialization"
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.388485315Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.389323401Z" level=info msg="API listen on [::]:2376"
	Apr 16 18:54:46 pause-334400 systemd[1]: Started Docker Application Container Engine.
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.127086261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.127188978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.127202280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.127373609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.136092969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.136430026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.136595153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.136835494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.186237768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.186298078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.186323282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.186402795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.189542221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.193913553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.193945559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.194307119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.418505568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.418659794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.418692700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.418970046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.501981949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.502321306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.502403920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.502692368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.524520424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.524822975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.524860181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.525463282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.603062178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.604042543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.608287153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.608622410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.188903248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.189745548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.189869363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.190791073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.464195909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.466458079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.466615398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.466914933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:17 pause-334400 dockerd[1341]: time="2024-04-16T18:55:17.866904442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:55:17 pause-334400 dockerd[1341]: time="2024-04-16T18:55:17.867047659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:55:17 pause-334400 dockerd[1341]: time="2024-04-16T18:55:17.867063461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:17 pause-334400 dockerd[1341]: time="2024-04-16T18:55:17.867627928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:18 pause-334400 dockerd[1341]: time="2024-04-16T18:55:18.312857048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:55:18 pause-334400 dockerd[1341]: time="2024-04-16T18:55:18.313069072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:55:18 pause-334400 dockerd[1341]: time="2024-04-16T18:55:18.313098276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:18 pause-334400 dockerd[1341]: time="2024-04-16T18:55:18.313263595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:58:58 pause-334400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.621999568Z" level=info msg="Processing signal 'terminated'"
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.807249476Z" level=info msg="ignoring event" container=e8005ff3c4657cb34819165ecb48cb9019b29277906428f55f19ec1d6a57ff43 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.809696686Z" level=info msg="shim disconnected" id=e8005ff3c4657cb34819165ecb48cb9019b29277906428f55f19ec1d6a57ff43 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.809799195Z" level=warning msg="cleaning up after shim disconnected" id=e8005ff3c4657cb34819165ecb48cb9019b29277906428f55f19ec1d6a57ff43 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.809810396Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.819122496Z" level=info msg="ignoring event" container=66c6dba2cbcd35ba5f85c71d9856bd7d33d3d48e40dc7310bc1da2675fc23166 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.819649841Z" level=info msg="shim disconnected" id=66c6dba2cbcd35ba5f85c71d9856bd7d33d3d48e40dc7310bc1da2675fc23166 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.819733048Z" level=warning msg="cleaning up after shim disconnected" id=66c6dba2cbcd35ba5f85c71d9856bd7d33d3d48e40dc7310bc1da2675fc23166 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.819744249Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.843735610Z" level=info msg="ignoring event" container=3894d46335dba82896d8ab9b9be40632713f24aa59ab0693ead9c3d6933353a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.844802901Z" level=info msg="shim disconnected" id=3894d46335dba82896d8ab9b9be40632713f24aa59ab0693ead9c3d6933353a5 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.845009519Z" level=warning msg="cleaning up after shim disconnected" id=3894d46335dba82896d8ab9b9be40632713f24aa59ab0693ead9c3d6933353a5 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.845067124Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.853795474Z" level=info msg="ignoring event" container=f1384b9ad98a9b3db8874f6c20017f6d7102d3393fb9e72830204ace12d96028 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.854252513Z" level=info msg="shim disconnected" id=f1384b9ad98a9b3db8874f6c20017f6d7102d3393fb9e72830204ace12d96028 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.855241698Z" level=warning msg="cleaning up after shim disconnected" id=f1384b9ad98a9b3db8874f6c20017f6d7102d3393fb9e72830204ace12d96028 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.855400211Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.873912801Z" level=info msg="ignoring event" container=87627c61554c5bf0d835f282207bc860f90e42f658d5b2202186dc1d60805efb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.874005709Z" level=info msg="ignoring event" container=b46f9243c365d23b3c6c7b380d904a7fac7d1d4b2f4f36bdb79758a1419e32a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.875556142Z" level=info msg="shim disconnected" id=b46f9243c365d23b3c6c7b380d904a7fac7d1d4b2f4f36bdb79758a1419e32a3 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.875623148Z" level=warning msg="cleaning up after shim disconnected" id=b46f9243c365d23b3c6c7b380d904a7fac7d1d4b2f4f36bdb79758a1419e32a3 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.875634049Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.886102748Z" level=info msg="ignoring event" container=67fe5eaae833967886117e3cb8f13b0c20faf89273fdf02fa26c3a1135ab2976 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.886416575Z" level=info msg="shim disconnected" id=67fe5eaae833967886117e3cb8f13b0c20faf89273fdf02fa26c3a1135ab2976 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.888649567Z" level=warning msg="cleaning up after shim disconnected" id=67fe5eaae833967886117e3cb8f13b0c20faf89273fdf02fa26c3a1135ab2976 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.888916790Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.897011585Z" level=info msg="shim disconnected" id=87627c61554c5bf0d835f282207bc860f90e42f658d5b2202186dc1d60805efb namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.898670827Z" level=warning msg="cleaning up after shim disconnected" id=87627c61554c5bf0d835f282207bc860f90e42f658d5b2202186dc1d60805efb namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.898796338Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.915793698Z" level=info msg="shim disconnected" id=fb1de6e7c040b967267f2fc48d4804c780ba0e9dabf724bda72fab5dd6f7aca3 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.915921709Z" level=warning msg="cleaning up after shim disconnected" id=fb1de6e7c040b967267f2fc48d4804c780ba0e9dabf724bda72fab5dd6f7aca3 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.915934710Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.916987900Z" level=info msg="shim disconnected" id=289b9a8be28ed2b175c9a155d6fd59dc07056b4b3bc1767b7b727eaaa8059341 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.917253823Z" level=info msg="ignoring event" container=fb1de6e7c040b967267f2fc48d4804c780ba0e9dabf724bda72fab5dd6f7aca3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.917371733Z" level=info msg="ignoring event" container=289b9a8be28ed2b175c9a155d6fd59dc07056b4b3bc1767b7b727eaaa8059341 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.917404436Z" level=info msg="ignoring event" container=56bf229084a1798cc6506e97a2e264b00084e846bb6db8afc46edcfdf5762123 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.917036804Z" level=warning msg="cleaning up after shim disconnected" id=289b9a8be28ed2b175c9a155d6fd59dc07056b4b3bc1767b7b727eaaa8059341 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.917473342Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.932478731Z" level=info msg="shim disconnected" id=56bf229084a1798cc6506e97a2e264b00084e846bb6db8afc46edcfdf5762123 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.932741253Z" level=warning msg="cleaning up after shim disconnected" id=56bf229084a1798cc6506e97a2e264b00084e846bb6db8afc46edcfdf5762123 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.932840562Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:59:03 pause-334400 dockerd[1335]: time="2024-04-16T18:59:03.747004588Z" level=info msg="ignoring event" container=1cac56013cab1560f2c8f00ad8845165e216a0fd5bbd8a774dd14ba8a42160a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:59:03 pause-334400 dockerd[1341]: time="2024-04-16T18:59:03.749992145Z" level=info msg="shim disconnected" id=1cac56013cab1560f2c8f00ad8845165e216a0fd5bbd8a774dd14ba8a42160a8 namespace=moby
	Apr 16 18:59:03 pause-334400 dockerd[1341]: time="2024-04-16T18:59:03.756333890Z" level=warning msg="cleaning up after shim disconnected" id=1cac56013cab1560f2c8f00ad8845165e216a0fd5bbd8a774dd14ba8a42160a8 namespace=moby
	Apr 16 18:59:03 pause-334400 dockerd[1341]: time="2024-04-16T18:59:03.756420297Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.721130426Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=9ee1d12a013f97405b30b2392817c7dcc4a9bb16781d51649279c155d91f9f60
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.760586284Z" level=info msg="ignoring event" container=9ee1d12a013f97405b30b2392817c7dcc4a9bb16781d51649279c155d91f9f60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:59:08 pause-334400 dockerd[1341]: time="2024-04-16T18:59:08.760981171Z" level=info msg="shim disconnected" id=9ee1d12a013f97405b30b2392817c7dcc4a9bb16781d51649279c155d91f9f60 namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1341]: time="2024-04-16T18:59:08.761516753Z" level=warning msg="cleaning up after shim disconnected" id=9ee1d12a013f97405b30b2392817c7dcc4a9bb16781d51649279c155d91f9f60 namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1341]: time="2024-04-16T18:59:08.761627349Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1341]: time="2024-04-16T18:59:08.780697501Z" level=warning msg="cleanup warnings time=\"2024-04-16T18:59:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.805663352Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.806727216Z" level=info msg="Daemon shutdown complete"
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.806900910Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.806912309Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 18:59:09 pause-334400 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 18:59:09 pause-334400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 18:59:09 pause-334400 systemd[1]: docker.service: Consumed 6.733s CPU time.
	Apr 16 18:59:09 pause-334400 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:59:09 pause-334400 dockerd[4154]: time="2024-04-16T18:59:09.887046510Z" level=info msg="Starting up"
	Apr 16 19:00:09 pause-334400 dockerd[4154]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 19:00:09 pause-334400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 19:00:09 pause-334400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 19:00:09 pause-334400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0416 19:00:09.815047    9908 out.go:239] * 
	* 
	W0416 19:00:09.816370    9908 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 19:00:09.817017    9908 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:94: failed to second start a running minikube with args: "out/minikube-windows-amd64.exe start -p pause-334400 --alsologtostderr -v=1 --driver=hyperv" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-334400 -n pause-334400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-334400 -n pause-334400: exit status 2 (11.2458375s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 19:00:10.190980   11792 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-334400 logs -n 25
E0416 19:01:07.220187    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-334400 logs -n 25: (1m49.1383312s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|---------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |               Args                |          Profile          |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|---------------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-833900      | kubernetes-upgrade-833900 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:40 UTC | 16 Apr 24 18:45 UTC |
	|         | --memory=2200                     |                           |                   |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=1            |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| start   | -p offline-docker-833900          | offline-docker-833900     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:40 UTC | 16 Apr 24 18:43 UTC |
	|         | --alsologtostderr -v=1            |                           |                   |                |                     |                     |
	|         | --memory=2048 --wait=true         |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| start   | -p NoKubernetes-833900            | NoKubernetes-833900       | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:40 UTC |                     |
	|         | --no-kubernetes                   |                           |                   |                |                     |                     |
	|         | --kubernetes-version=1.20         |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| start   | -p NoKubernetes-833900            | NoKubernetes-833900       | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:40 UTC |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| delete  | -p offline-docker-833900          | offline-docker-833900     | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:43 UTC | 16 Apr 24 18:44 UTC |
	| start   | -p stopped-upgrade-280600         | minikube                  | minikube5\jenkins | v1.26.0        | 16 Apr 24 18:44 GMT | 16 Apr 24 18:49 GMT |
	|         | --memory=2200                     |                           |                   |                |                     |                     |
	|         | --vm-driver=hyperv                |                           |                   |                |                     |                     |
	| delete  | -p NoKubernetes-833900            | NoKubernetes-833900       | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:45 UTC | 16 Apr 24 18:45 UTC |
	| start   | -p running-upgrade-360500         | minikube                  | minikube5\jenkins | v1.26.0        | 16 Apr 24 18:45 GMT | 16 Apr 24 18:51 GMT |
	|         | --memory=2200                     |                           |                   |                |                     |                     |
	|         | --vm-driver=hyperv                |                           |                   |                |                     |                     |
	| stop    | -p kubernetes-upgrade-833900      | kubernetes-upgrade-833900 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:45 UTC | 16 Apr 24 18:46 UTC |
	| start   | -p kubernetes-upgrade-833900      | kubernetes-upgrade-833900 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:46 UTC | 16 Apr 24 18:53 UTC |
	|         | --memory=2200                     |                           |                   |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=1            |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| ssh     | force-systemd-flag-833900         | force-systemd-flag-833900 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:47 UTC | 16 Apr 24 18:47 UTC |
	|         | ssh docker info --format          |                           |                   |                |                     |                     |
	|         | {{.CgroupDriver}}                 |                           |                   |                |                     |                     |
	| delete  | -p force-systemd-flag-833900      | force-systemd-flag-833900 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:47 UTC | 16 Apr 24 18:48 UTC |
	| start   | -p pause-334400 --memory=2048     | pause-334400              | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:48 UTC | 16 Apr 24 18:55 UTC |
	|         | --install-addons=false            |                           |                   |                |                     |                     |
	|         | --wait=all --driver=hyperv        |                           |                   |                |                     |                     |
	| stop    | stopped-upgrade-280600 stop       | minikube                  | minikube5\jenkins | v1.26.0        | 16 Apr 24 18:49 GMT | 16 Apr 24 18:50 GMT |
	| start   | -p stopped-upgrade-280600         | stopped-upgrade-280600    | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:50 UTC | 16 Apr 24 18:56 UTC |
	|         | --memory=2200                     |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=1            |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| start   | -p running-upgrade-360500         | running-upgrade-360500    | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:51 UTC | 16 Apr 24 18:58 UTC |
	|         | --memory=2200                     |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=1            |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| start   | -p kubernetes-upgrade-833900      | kubernetes-upgrade-833900 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:53 UTC |                     |
	|         | --memory=2200                     |                           |                   |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| start   | -p kubernetes-upgrade-833900      | kubernetes-upgrade-833900 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:53 UTC | 16 Apr 24 18:58 UTC |
	|         | --memory=2200                     |                           |                   |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=1            |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| start   | -p pause-334400                   | pause-334400              | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:55 UTC |                     |
	|         | --alsologtostderr -v=1            |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| delete  | -p stopped-upgrade-280600         | stopped-upgrade-280600    | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:56 UTC | 16 Apr 24 18:57 UTC |
	| start   | -p cert-expiration-396200         | cert-expiration-396200    | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:57 UTC |                     |
	|         | --memory=2048                     |                           |                   |                |                     |                     |
	|         | --cert-expiration=3m              |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| delete  | -p kubernetes-upgrade-833900      | kubernetes-upgrade-833900 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:58 UTC | 16 Apr 24 18:59 UTC |
	| delete  | -p running-upgrade-360500         | running-upgrade-360500    | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:58 UTC | 16 Apr 24 18:59 UTC |
	| start   | -p docker-flags-442400            | docker-flags-442400       | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:59 UTC |                     |
	|         | --cache-images=false              |                           |                   |                |                     |                     |
	|         | --memory=2048                     |                           |                   |                |                     |                     |
	|         | --install-addons=false            |                           |                   |                |                     |                     |
	|         | --wait=false                      |                           |                   |                |                     |                     |
	|         | --docker-env=FOO=BAR              |                           |                   |                |                     |                     |
	|         | --docker-env=BAZ=BAT              |                           |                   |                |                     |                     |
	|         | --docker-opt=debug                |                           |                   |                |                     |                     |
	|         | --docker-opt=icc=true             |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=5            |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	| start   | -p cert-options-104100            | cert-options-104100       | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 18:59 UTC |                     |
	|         | --memory=2048                     |                           |                   |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1         |                           |                   |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15     |                           |                   |                |                     |                     |
	|         | --apiserver-names=localhost       |                           |                   |                |                     |                     |
	|         | --apiserver-names=www.google.com  |                           |                   |                |                     |                     |
	|         | --apiserver-port=8555             |                           |                   |                |                     |                     |
	|         | --driver=hyperv                   |                           |                   |                |                     |                     |
	|---------|-----------------------------------|---------------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 18:59:53
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 18:59:53.376201    6480 out.go:291] Setting OutFile to fd 1768 ...
	I0416 18:59:53.376721    6480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:59:53.376721    6480 out.go:304] Setting ErrFile to fd 1772...
	I0416 18:59:53.376721    6480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:59:53.397552    6480 out.go:298] Setting JSON to false
	I0416 18:59:53.402536    6480 start.go:129] hostinfo: {"hostname":"minikube5","uptime":31622,"bootTime":1713262370,"procs":207,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 18:59:53.402536    6480 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 18:59:53.403547    6480 out.go:177] * [cert-options-104100] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 18:59:53.404575    6480 notify.go:220] Checking for updates...
	I0416 18:59:53.404575    6480 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 18:59:53.406204    6480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 18:59:53.406913    6480 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 18:59:53.407612    6480 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 18:59:53.407975    6480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 18:59:53.593507    4968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:59:53.594511    4968 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:59:53.594563    4968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-396200 ).networkadapters[0]).ipaddresses[0]
	I0416 18:59:56.046056    4968 main.go:141] libmachine: [stdout =====>] : 172.19.86.30
	
	I0416 18:59:56.046056    4968 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:59:56.046679    4968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-396200 ).state
	I0416 18:59:53.410018    6480 config.go:182] Loaded profile config "cert-expiration-396200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:59:53.410157    6480 config.go:182] Loaded profile config "docker-flags-442400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:59:53.410785    6480 config.go:182] Loaded profile config "pause-334400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:59:53.410785    6480 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 18:59:58.523385    6480 out.go:177] * Using the hyperv driver based on user configuration
	I0416 18:59:58.524018    6480 start.go:297] selected driver: hyperv
	I0416 18:59:58.524018    6480 start.go:901] validating driver "hyperv" against <nil>
	I0416 18:59:58.524018    6480 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 18:59:58.581028    6480 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 18:59:58.582027    6480 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0416 18:59:58.582027    6480 cni.go:84] Creating CNI manager for ""
	I0416 18:59:58.582027    6480 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0416 18:59:58.582027    6480 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0416 18:59:58.582027    6480 start.go:340] cluster config:
	{Name:cert-options-104100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:cert-options-104100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 18:59:58.583027    6480 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 18:59:58.584027    6480 out.go:177] * Starting "cert-options-104100" primary control-plane node in "cert-options-104100" cluster
	I0416 18:59:58.093294    4968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:59:58.093294    4968 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:59:58.093489    4968 machine.go:94] provisionDockerMachine start ...
	I0416 18:59:58.093589    4968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-396200 ).state
	I0416 19:00:00.093197    4968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:00:00.093197    4968 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:00:00.093433    4968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-396200 ).networkadapters[0]).ipaddresses[0]
	I0416 18:59:58.584027    6480 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 18:59:58.585033    6480 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0416 18:59:58.585033    6480 cache.go:56] Caching tarball of preloaded images
	I0416 18:59:58.585033    6480 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0416 18:59:58.585033    6480 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 18:59:58.585033    6480 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\cert-options-104100\config.json ...
	I0416 18:59:58.585033    6480 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\cert-options-104100\config.json: {Name:mk52c9dd8d46010ba4d9b2b07ff123a8ae1a5940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:59:58.586030    6480 start.go:360] acquireMachinesLock for cert-options-104100: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 19:00:02.478667    4968 main.go:141] libmachine: [stdout =====>] : 172.19.86.30
	
	I0416 19:00:02.478667    4968 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:00:02.483796    4968 main.go:141] libmachine: Using SSH client type: native
	I0416 19:00:02.484390    4968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.86.30 22 <nil> <nil>}
	I0416 19:00:02.484390    4968 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 19:00:02.623821    4968 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 19:00:02.624018    4968 buildroot.go:166] provisioning hostname "cert-expiration-396200"
	I0416 19:00:02.624018    4968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-396200 ).state
	I0416 19:00:04.670958    4968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:00:04.670958    4968 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:00:04.671977    4968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-396200 ).networkadapters[0]).ipaddresses[0]
	I0416 19:00:09.750526    9908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3189309s)
	I0416 19:00:09.759492    9908 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0416 19:00:09.812308    9908 out.go:177] 
	W0416 19:00:09.813383    9908 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 16 18:53:59 pause-334400 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:53:59 pause-334400 dockerd[672]: time="2024-04-16T18:53:59.144054469Z" level=info msg="Starting up"
	Apr 16 18:53:59 pause-334400 dockerd[672]: time="2024-04-16T18:53:59.145438882Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 18:53:59 pause-334400 dockerd[672]: time="2024-04-16T18:53:59.150364118Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.178528121Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.203998637Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204185633Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204349017Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204367727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204447968Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204537114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204733015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204837369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204860380Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204871986Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.204957130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.205488704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.208663139Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.208759288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.208910166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.209008617Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.209184007Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.209327081Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.209342388Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218007651Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218335920Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218417662Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218521015Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218540925Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218658986Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.218906213Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219015870Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219115121Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219132430Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219227479Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219244387Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219255893Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219269000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219281807Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219302918Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219316325Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219327530Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219351543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219363849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219374855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219386761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219397366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219413875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219429183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219448793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219462000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219475006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219489013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219499719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219511525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219527533Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219553146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219578259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219589265Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219634788Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219649796Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219660502Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219669907Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219753249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219847998Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.219860605Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.220060808Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.220138348Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.220292127Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 18:53:59 pause-334400 dockerd[679]: time="2024-04-16T18:53:59.220331447Z" level=info msg="containerd successfully booted in 0.042957s"
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.195972415Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.214556102Z" level=info msg="Loading containers: start."
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.427505344Z" level=info msg="Loading containers: done."
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.444768246Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.444948435Z" level=info msg="Daemon has completed initialization"
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.510686608Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 18:54:00 pause-334400 dockerd[672]: time="2024-04-16T18:54:00.511323822Z" level=info msg="API listen on [::]:2376"
	Apr 16 18:54:00 pause-334400 systemd[1]: Started Docker Application Container Engine.
	Apr 16 18:54:29 pause-334400 dockerd[672]: time="2024-04-16T18:54:29.208427817Z" level=info msg="Processing signal 'terminated'"
	Apr 16 18:54:29 pause-334400 dockerd[672]: time="2024-04-16T18:54:29.210154695Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 18:54:29 pause-334400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 18:54:29 pause-334400 dockerd[672]: time="2024-04-16T18:54:29.211210303Z" level=info msg="Daemon shutdown complete"
	Apr 16 18:54:29 pause-334400 dockerd[672]: time="2024-04-16T18:54:29.211523135Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 18:54:29 pause-334400 dockerd[672]: time="2024-04-16T18:54:29.211716955Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 18:54:30 pause-334400 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 18:54:30 pause-334400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 18:54:30 pause-334400 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:54:30 pause-334400 dockerd[1029]: time="2024-04-16T18:54:30.285118238Z" level=info msg="Starting up"
	Apr 16 18:54:30 pause-334400 dockerd[1029]: time="2024-04-16T18:54:30.286750405Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 18:54:30 pause-334400 dockerd[1029]: time="2024-04-16T18:54:30.291470890Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1035
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.337283092Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364803817Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364860623Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364902127Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364916829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364944432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.364957533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365293067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365379376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365398478Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365489788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365525291Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.365711010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.368705618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.368808228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.368975545Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369068855Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369099058Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369117060Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369130961Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369259274Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369306179Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369322781Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369338483Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369353384Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369403889Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369761626Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369901040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.369988949Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370010852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370026353Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370040155Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370053356Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370069358Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370084159Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370096960Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370109762Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370121963Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370141965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370163267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370177669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370191170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370203871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370216973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370229974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370242675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370257677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370272879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370284580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370296081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370308282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370323684Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370344586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370356587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370368488Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370480200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370778730Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370869740Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370885841Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.370951548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.371035557Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.371049758Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.371400894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.371684223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.371827338Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 18:54:30 pause-334400 dockerd[1035]: time="2024-04-16T18:54:30.372008057Z" level=info msg="containerd successfully booted in 0.038797s"
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.335093515Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.352183570Z" level=info msg="Loading containers: start."
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.516459132Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.586774250Z" level=info msg="Loading containers: done."
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.602106924Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.602274641Z" level=info msg="Daemon has completed initialization"
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.646147544Z" level=info msg="API listen on [::]:2376"
	Apr 16 18:54:31 pause-334400 systemd[1]: Started Docker Application Container Engine.
	Apr 16 18:54:31 pause-334400 dockerd[1029]: time="2024-04-16T18:54:31.646165446Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 18:54:43 pause-334400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 18:54:43 pause-334400 dockerd[1029]: time="2024-04-16T18:54:43.726783596Z" level=info msg="Processing signal 'terminated'"
	Apr 16 18:54:43 pause-334400 dockerd[1029]: time="2024-04-16T18:54:43.729145539Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 18:54:43 pause-334400 dockerd[1029]: time="2024-04-16T18:54:43.729751001Z" level=info msg="Daemon shutdown complete"
	Apr 16 18:54:43 pause-334400 dockerd[1029]: time="2024-04-16T18:54:43.729810007Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 18:54:43 pause-334400 dockerd[1029]: time="2024-04-16T18:54:43.729817408Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 18:54:44 pause-334400 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 18:54:44 pause-334400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 18:54:44 pause-334400 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:54:44 pause-334400 dockerd[1335]: time="2024-04-16T18:54:44.804149986Z" level=info msg="Starting up"
	Apr 16 18:54:44 pause-334400 dockerd[1335]: time="2024-04-16T18:54:44.805099483Z" level=info msg="containerd not running, starting managed containerd"
	Apr 16 18:54:44 pause-334400 dockerd[1335]: time="2024-04-16T18:54:44.806230800Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1341
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.841400810Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.867879228Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868078248Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868222363Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868244765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868276068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868298471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868478389Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868640106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868662808Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868675009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868702512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.868838526Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.871967447Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872095760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872264878Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872365288Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872394791Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872418494Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872432795Z" level=info msg="metadata content store policy set" policy=shared
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872659218Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872709423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872728825Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872744627Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872765829Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.872816434Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873193373Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873332487Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873350189Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873365491Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873382192Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873396994Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873410895Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873426897Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873442399Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873467101Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873483003Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873496604Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873520107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873535408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873550110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873598515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873614416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873629418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873642519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873656621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873670722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873687224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873701525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873714427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873728028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873745330Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873770032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873784234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873797235Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873846440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873956451Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873974253Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.873987355Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874127969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874146171Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874158972Z" level=info msg="NRI interface is disabled by configuration."
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874404397Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874718930Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.874938152Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 16 18:54:44 pause-334400 dockerd[1341]: time="2024-04-16T18:54:44.875141273Z" level=info msg="containerd successfully booted in 0.037263s"
	Apr 16 18:54:45 pause-334400 dockerd[1335]: time="2024-04-16T18:54:45.851686013Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.074298564Z" level=info msg="Loading containers: start."
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.241711649Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.318668948Z" level=info msg="Loading containers: done."
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.344353285Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.344428992Z" level=info msg="Daemon has completed initialization"
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.388485315Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 16 18:54:46 pause-334400 dockerd[1335]: time="2024-04-16T18:54:46.389323401Z" level=info msg="API listen on [::]:2376"
	Apr 16 18:54:46 pause-334400 systemd[1]: Started Docker Application Container Engine.
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.127086261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.127188978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.127202280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.127373609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.136092969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.136430026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.136595153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.136835494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.186237768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.186298078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.186323282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.186402795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.189542221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.193913553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.193945559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.194307119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.418505568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.418659794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.418692700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.418970046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.501981949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.502321306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.502403920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.502692368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.524520424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.524822975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.524860181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.525463282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.603062178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.604042543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.608287153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:54:55 pause-334400 dockerd[1341]: time="2024-04-16T18:54:55.608622410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.188903248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.189745548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.189869363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.190791073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.464195909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.466458079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.466615398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:16 pause-334400 dockerd[1341]: time="2024-04-16T18:55:16.466914933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:17 pause-334400 dockerd[1341]: time="2024-04-16T18:55:17.866904442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:55:17 pause-334400 dockerd[1341]: time="2024-04-16T18:55:17.867047659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:55:17 pause-334400 dockerd[1341]: time="2024-04-16T18:55:17.867063461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:17 pause-334400 dockerd[1341]: time="2024-04-16T18:55:17.867627928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:18 pause-334400 dockerd[1341]: time="2024-04-16T18:55:18.312857048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 16 18:55:18 pause-334400 dockerd[1341]: time="2024-04-16T18:55:18.313069072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 16 18:55:18 pause-334400 dockerd[1341]: time="2024-04-16T18:55:18.313098276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:55:18 pause-334400 dockerd[1341]: time="2024-04-16T18:55:18.313263595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 16 18:58:58 pause-334400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.621999568Z" level=info msg="Processing signal 'terminated'"
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.807249476Z" level=info msg="ignoring event" container=e8005ff3c4657cb34819165ecb48cb9019b29277906428f55f19ec1d6a57ff43 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.809696686Z" level=info msg="shim disconnected" id=e8005ff3c4657cb34819165ecb48cb9019b29277906428f55f19ec1d6a57ff43 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.809799195Z" level=warning msg="cleaning up after shim disconnected" id=e8005ff3c4657cb34819165ecb48cb9019b29277906428f55f19ec1d6a57ff43 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.809810396Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.819122496Z" level=info msg="ignoring event" container=66c6dba2cbcd35ba5f85c71d9856bd7d33d3d48e40dc7310bc1da2675fc23166 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.819649841Z" level=info msg="shim disconnected" id=66c6dba2cbcd35ba5f85c71d9856bd7d33d3d48e40dc7310bc1da2675fc23166 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.819733048Z" level=warning msg="cleaning up after shim disconnected" id=66c6dba2cbcd35ba5f85c71d9856bd7d33d3d48e40dc7310bc1da2675fc23166 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.819744249Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.843735610Z" level=info msg="ignoring event" container=3894d46335dba82896d8ab9b9be40632713f24aa59ab0693ead9c3d6933353a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.844802901Z" level=info msg="shim disconnected" id=3894d46335dba82896d8ab9b9be40632713f24aa59ab0693ead9c3d6933353a5 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.845009519Z" level=warning msg="cleaning up after shim disconnected" id=3894d46335dba82896d8ab9b9be40632713f24aa59ab0693ead9c3d6933353a5 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.845067124Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.853795474Z" level=info msg="ignoring event" container=f1384b9ad98a9b3db8874f6c20017f6d7102d3393fb9e72830204ace12d96028 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.854252513Z" level=info msg="shim disconnected" id=f1384b9ad98a9b3db8874f6c20017f6d7102d3393fb9e72830204ace12d96028 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.855241698Z" level=warning msg="cleaning up after shim disconnected" id=f1384b9ad98a9b3db8874f6c20017f6d7102d3393fb9e72830204ace12d96028 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.855400211Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.873912801Z" level=info msg="ignoring event" container=87627c61554c5bf0d835f282207bc860f90e42f658d5b2202186dc1d60805efb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.874005709Z" level=info msg="ignoring event" container=b46f9243c365d23b3c6c7b380d904a7fac7d1d4b2f4f36bdb79758a1419e32a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.875556142Z" level=info msg="shim disconnected" id=b46f9243c365d23b3c6c7b380d904a7fac7d1d4b2f4f36bdb79758a1419e32a3 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.875623148Z" level=warning msg="cleaning up after shim disconnected" id=b46f9243c365d23b3c6c7b380d904a7fac7d1d4b2f4f36bdb79758a1419e32a3 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.875634049Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.886102748Z" level=info msg="ignoring event" container=67fe5eaae833967886117e3cb8f13b0c20faf89273fdf02fa26c3a1135ab2976 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.886416575Z" level=info msg="shim disconnected" id=67fe5eaae833967886117e3cb8f13b0c20faf89273fdf02fa26c3a1135ab2976 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.888649567Z" level=warning msg="cleaning up after shim disconnected" id=67fe5eaae833967886117e3cb8f13b0c20faf89273fdf02fa26c3a1135ab2976 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.888916790Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.897011585Z" level=info msg="shim disconnected" id=87627c61554c5bf0d835f282207bc860f90e42f658d5b2202186dc1d60805efb namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.898670827Z" level=warning msg="cleaning up after shim disconnected" id=87627c61554c5bf0d835f282207bc860f90e42f658d5b2202186dc1d60805efb namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.898796338Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.915793698Z" level=info msg="shim disconnected" id=fb1de6e7c040b967267f2fc48d4804c780ba0e9dabf724bda72fab5dd6f7aca3 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.915921709Z" level=warning msg="cleaning up after shim disconnected" id=fb1de6e7c040b967267f2fc48d4804c780ba0e9dabf724bda72fab5dd6f7aca3 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.915934710Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.916987900Z" level=info msg="shim disconnected" id=289b9a8be28ed2b175c9a155d6fd59dc07056b4b3bc1767b7b727eaaa8059341 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.917253823Z" level=info msg="ignoring event" container=fb1de6e7c040b967267f2fc48d4804c780ba0e9dabf724bda72fab5dd6f7aca3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.917371733Z" level=info msg="ignoring event" container=289b9a8be28ed2b175c9a155d6fd59dc07056b4b3bc1767b7b727eaaa8059341 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1335]: time="2024-04-16T18:58:58.917404436Z" level=info msg="ignoring event" container=56bf229084a1798cc6506e97a2e264b00084e846bb6db8afc46edcfdf5762123 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.917036804Z" level=warning msg="cleaning up after shim disconnected" id=289b9a8be28ed2b175c9a155d6fd59dc07056b4b3bc1767b7b727eaaa8059341 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.917473342Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.932478731Z" level=info msg="shim disconnected" id=56bf229084a1798cc6506e97a2e264b00084e846bb6db8afc46edcfdf5762123 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.932741253Z" level=warning msg="cleaning up after shim disconnected" id=56bf229084a1798cc6506e97a2e264b00084e846bb6db8afc46edcfdf5762123 namespace=moby
	Apr 16 18:58:58 pause-334400 dockerd[1341]: time="2024-04-16T18:58:58.932840562Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:59:03 pause-334400 dockerd[1335]: time="2024-04-16T18:59:03.747004588Z" level=info msg="ignoring event" container=1cac56013cab1560f2c8f00ad8845165e216a0fd5bbd8a774dd14ba8a42160a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:59:03 pause-334400 dockerd[1341]: time="2024-04-16T18:59:03.749992145Z" level=info msg="shim disconnected" id=1cac56013cab1560f2c8f00ad8845165e216a0fd5bbd8a774dd14ba8a42160a8 namespace=moby
	Apr 16 18:59:03 pause-334400 dockerd[1341]: time="2024-04-16T18:59:03.756333890Z" level=warning msg="cleaning up after shim disconnected" id=1cac56013cab1560f2c8f00ad8845165e216a0fd5bbd8a774dd14ba8a42160a8 namespace=moby
	Apr 16 18:59:03 pause-334400 dockerd[1341]: time="2024-04-16T18:59:03.756420297Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.721130426Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=9ee1d12a013f97405b30b2392817c7dcc4a9bb16781d51649279c155d91f9f60
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.760586284Z" level=info msg="ignoring event" container=9ee1d12a013f97405b30b2392817c7dcc4a9bb16781d51649279c155d91f9f60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 16 18:59:08 pause-334400 dockerd[1341]: time="2024-04-16T18:59:08.760981171Z" level=info msg="shim disconnected" id=9ee1d12a013f97405b30b2392817c7dcc4a9bb16781d51649279c155d91f9f60 namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1341]: time="2024-04-16T18:59:08.761516753Z" level=warning msg="cleaning up after shim disconnected" id=9ee1d12a013f97405b30b2392817c7dcc4a9bb16781d51649279c155d91f9f60 namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1341]: time="2024-04-16T18:59:08.761627349Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1341]: time="2024-04-16T18:59:08.780697501Z" level=warning msg="cleanup warnings time=\"2024-04-16T18:59:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.805663352Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.806727216Z" level=info msg="Daemon shutdown complete"
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.806900910Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 16 18:59:08 pause-334400 dockerd[1335]: time="2024-04-16T18:59:08.806912309Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 16 18:59:09 pause-334400 systemd[1]: docker.service: Deactivated successfully.
	Apr 16 18:59:09 pause-334400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 18:59:09 pause-334400 systemd[1]: docker.service: Consumed 6.733s CPU time.
	Apr 16 18:59:09 pause-334400 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 18:59:09 pause-334400 dockerd[4154]: time="2024-04-16T18:59:09.887046510Z" level=info msg="Starting up"
	Apr 16 19:00:09 pause-334400 dockerd[4154]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 19:00:09 pause-334400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 19:00:09 pause-334400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 19:00:09 pause-334400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0416 19:00:09.815047    9908 out.go:239] * 
	W0416 19:00:09.816370    9908 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 19:00:09.817017    9908 out.go:177] 
	I0416 19:00:07.046745    4968 main.go:141] libmachine: [stdout =====>] : 172.19.86.30
	
	I0416 19:00:07.046745    4968 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:00:07.051472    4968 main.go:141] libmachine: Using SSH client type: native
	I0416 19:00:07.051894    4968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.86.30 22 <nil> <nil>}
	I0416 19:00:07.051894    4968 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-396200 && echo "cert-expiration-396200" | sudo tee /etc/hostname
	I0416 19:00:07.204436    4968 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-396200
	
	I0416 19:00:07.204539    4968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-396200 ).state
	I0416 19:00:09.171478    4968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:00:09.171478    4968 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:00:09.172166    4968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-396200 ).networkadapters[0]).ipaddresses[0]
	I0416 19:00:11.614848    4968 main.go:141] libmachine: [stdout =====>] : 172.19.86.30
	
	I0416 19:00:11.614848    4968 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:00:11.620864    4968 main.go:141] libmachine: Using SSH client type: native
	I0416 19:00:11.620864    4968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xaea1c0] 0xaecda0 <nil>  [] 0s} 172.19.86.30 22 <nil> <nil>}
	I0416 19:00:11.620864    4968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-396200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-396200/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-396200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 19:00:11.782952    4968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 19:00:11.782952    4968 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0416 19:00:11.782952    4968 buildroot.go:174] setting up certificates
	I0416 19:00:11.782952    4968 provision.go:84] configureAuth start
	I0416 19:00:11.782952    4968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-396200 ).state
	I0416 19:00:13.844496    4968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 19:00:13.844496    4968 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:00:13.844496    4968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-396200 ).networkadapters[0]).ipaddresses[0]
	I0416 19:00:16.221662    4968 main.go:141] libmachine: [stdout =====>] : 172.19.86.30
	
	I0416 19:00:16.221662    4968 main.go:141] libmachine: [stderr =====>] : 
	I0416 19:00:16.221662    4968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-396200 ).state
	
	
	==> Docker <==
	Apr 16 19:00:09 pause-334400 cri-dockerd[1239]: time="2024-04-16T19:00:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb1de6e7c040b967267f2fc48d4804c780ba0e9dabf724bda72fab5dd6f7aca3'"
	Apr 16 19:00:10 pause-334400 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Apr 16 19:00:10 pause-334400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 19:00:10 pause-334400 systemd[1]: Starting Docker Application Container Engine...
	Apr 16 19:00:10 pause-334400 dockerd[4359]: time="2024-04-16T19:00:10.098550950Z" level=info msg="Starting up"
	Apr 16 19:01:10 pause-334400 dockerd[4359]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 16 19:01:10 pause-334400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 16 19:01:10 pause-334400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 16 19:01:10 pause-334400 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 16 19:01:10 pause-334400 cri-dockerd[1239]: time="2024-04-16T19:01:10Z" level=error msg="error getting RW layer size for container ID 'fb1de6e7c040b967267f2fc48d4804c780ba0e9dabf724bda72fab5dd6f7aca3': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/fb1de6e7c040b967267f2fc48d4804c780ba0e9dabf724bda72fab5dd6f7aca3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 16 19:01:10 pause-334400 cri-dockerd[1239]: time="2024-04-16T19:01:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fb1de6e7c040b967267f2fc48d4804c780ba0e9dabf724bda72fab5dd6f7aca3'"
	Apr 16 19:01:10 pause-334400 cri-dockerd[1239]: time="2024-04-16T19:01:10Z" level=error msg="error getting RW layer size for container ID '67fe5eaae833967886117e3cb8f13b0c20faf89273fdf02fa26c3a1135ab2976': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/67fe5eaae833967886117e3cb8f13b0c20faf89273fdf02fa26c3a1135ab2976/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 16 19:01:10 pause-334400 cri-dockerd[1239]: time="2024-04-16T19:01:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '67fe5eaae833967886117e3cb8f13b0c20faf89273fdf02fa26c3a1135ab2976'"
	Apr 16 19:01:10 pause-334400 cri-dockerd[1239]: time="2024-04-16T19:01:10Z" level=error msg="error getting RW layer size for container ID '1cac56013cab1560f2c8f00ad8845165e216a0fd5bbd8a774dd14ba8a42160a8': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/1cac56013cab1560f2c8f00ad8845165e216a0fd5bbd8a774dd14ba8a42160a8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 16 19:01:10 pause-334400 cri-dockerd[1239]: time="2024-04-16T19:01:10Z" level=error msg="error getting RW layer size for container ID 'e8005ff3c4657cb34819165ecb48cb9019b29277906428f55f19ec1d6a57ff43': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/e8005ff3c4657cb34819165ecb48cb9019b29277906428f55f19ec1d6a57ff43/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 16 19:01:10 pause-334400 cri-dockerd[1239]: time="2024-04-16T19:01:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'e8005ff3c4657cb34819165ecb48cb9019b29277906428f55f19ec1d6a57ff43'"
	Apr 16 19:01:10 pause-334400 cri-dockerd[1239]: time="2024-04-16T19:01:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1cac56013cab1560f2c8f00ad8845165e216a0fd5bbd8a774dd14ba8a42160a8'"
	Apr 16 19:01:10 pause-334400 cri-dockerd[1239]: time="2024-04-16T19:01:10Z" level=error msg="error getting RW layer size for container ID '56bf229084a1798cc6506e97a2e264b00084e846bb6db8afc46edcfdf5762123': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/56bf229084a1798cc6506e97a2e264b00084e846bb6db8afc46edcfdf5762123/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 16 19:01:10 pause-334400 cri-dockerd[1239]: time="2024-04-16T19:01:10Z" level=error msg="error getting RW layer size for container ID '9ee1d12a013f97405b30b2392817c7dcc4a9bb16781d51649279c155d91f9f60': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/9ee1d12a013f97405b30b2392817c7dcc4a9bb16781d51649279c155d91f9f60/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 16 19:01:10 pause-334400 cri-dockerd[1239]: time="2024-04-16T19:01:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9ee1d12a013f97405b30b2392817c7dcc4a9bb16781d51649279c155d91f9f60'"
	Apr 16 19:01:10 pause-334400 cri-dockerd[1239]: time="2024-04-16T19:01:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '56bf229084a1798cc6506e97a2e264b00084e846bb6db8afc46edcfdf5762123'"
	Apr 16 19:01:10 pause-334400 cri-dockerd[1239]: time="2024-04-16T19:01:10Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Apr 16 19:01:10 pause-334400 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Apr 16 19:01:10 pause-334400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 16 19:01:10 pause-334400 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-16T19:01:12Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr16 18:54] systemd-fstab-generator[954]: Ignoring "noauto" option for root device
	[  +0.093752] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.514450] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.192774] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[  +0.221262] systemd-fstab-generator[1021]: Ignoring "noauto" option for root device
	[  +2.765290] systemd-fstab-generator[1192]: Ignoring "noauto" option for root device
	[  +0.211405] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.176783] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.282452] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[ +11.110418] systemd-fstab-generator[1327]: Ignoring "noauto" option for root device
	[  +0.106132] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.064779] systemd-fstab-generator[1525]: Ignoring "noauto" option for root device
	[  +7.025334] systemd-fstab-generator[1725]: Ignoring "noauto" option for root device
	[  +0.094269] kauditd_printk_skb: 73 callbacks suppressed
	[Apr16 18:55] systemd-fstab-generator[2126]: Ignoring "noauto" option for root device
	[  +0.119803] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.485651] systemd-fstab-generator[2360]: Ignoring "noauto" option for root device
	[  +0.184351] kauditd_printk_skb: 12 callbacks suppressed
	[Apr16 18:58] hrtimer: interrupt took 2165786 ns
	[  +9.330340] systemd-fstab-generator[3717]: Ignoring "noauto" option for root device
	[  +0.175691] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.500462] systemd-fstab-generator[3755]: Ignoring "noauto" option for root device
	[  +0.257454] systemd-fstab-generator[3767]: Ignoring "noauto" option for root device
	[  +0.295031] systemd-fstab-generator[3796]: Ignoring "noauto" option for root device
	[Apr16 18:59] kauditd_printk_skb: 87 callbacks suppressed
	
	
	==> kernel <==
	 19:02:10 up 9 min,  0 users,  load average: 0.09, 0.33, 0.19
	Linux pause-334400 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 16 19:02:02 pause-334400 kubelet[2133]: E0416 19:02:02.573278    2133 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 172.19.87.234:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-pause-334400.17c6d7c2bbb1b34e  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-pause-334400,UID:8f4d6b1d3b152fb5d55e674f464f445f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused,Source:EventSource{Component:kubelet,Host:pause-334400,},FirstTimestamp:2024-04-16 18:58:59.633664846 +0000 UTC m=+237.296584730,LastTimestamp:2024-04-16 18:58:59.633664846 +0000 UTC m=+237.296584730,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-334400,}"
	Apr 16 19:02:02 pause-334400 kubelet[2133]: I0416 19:02:02.652997    2133 status_manager.go:853] "Failed to get status for pod" podUID="8a1f7848254a3bdba32b4621cc47826d" pod="kube-system/kube-apiserver-pause-334400" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-334400\": dial tcp 172.19.87.234:8443: connect: connection refused"
	Apr 16 19:02:02 pause-334400 kubelet[2133]: E0416 19:02:02.690078    2133 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 19:02:02 pause-334400 kubelet[2133]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 19:02:02 pause-334400 kubelet[2133]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 19:02:02 pause-334400 kubelet[2133]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 19:02:02 pause-334400 kubelet[2133]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 19:02:06 pause-334400 kubelet[2133]: E0416 19:02:06.247660    2133 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m8.264493199s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 16 19:02:08 pause-334400 kubelet[2133]: E0416 19:02:08.364131    2133 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-334400?timeout=10s\": dial tcp 172.19.87.234:8443: connect: connection refused" interval="7s"
	Apr 16 19:02:10 pause-334400 kubelet[2133]: E0416 19:02:10.423008    2133 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 16 19:02:10 pause-334400 kubelet[2133]: E0416 19:02:10.423054    2133 kuberuntime_image.go:105] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 16 19:02:10 pause-334400 kubelet[2133]: I0416 19:02:10.423071    2133 image_gc_manager.go:215] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 16 19:02:10 pause-334400 kubelet[2133]: E0416 19:02:10.423220    2133 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 16 19:02:10 pause-334400 kubelet[2133]: E0416 19:02:10.423246    2133 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 16 19:02:10 pause-334400 kubelet[2133]: E0416 19:02:10.423285    2133 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 16 19:02:10 pause-334400 kubelet[2133]: E0416 19:02:10.423305    2133 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 16 19:02:10 pause-334400 kubelet[2133]: E0416 19:02:10.423346    2133 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 16 19:02:10 pause-334400 kubelet[2133]: E0416 19:02:10.423367    2133 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 16 19:02:10 pause-334400 kubelet[2133]: E0416 19:02:10.423386    2133 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 16 19:02:10 pause-334400 kubelet[2133]: E0416 19:02:10.423460    2133 kubelet.go:2902] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 16 19:02:10 pause-334400 kubelet[2133]: E0416 19:02:10.423507    2133 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 16 19:02:10 pause-334400 kubelet[2133]: E0416 19:02:10.423535    2133 kuberuntime_container.go:494] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 16 19:02:10 pause-334400 kubelet[2133]: E0416 19:02:10.424379    2133 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 16 19:02:10 pause-334400 kubelet[2133]: E0416 19:02:10.424408    2133 kuberuntime_container.go:494] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 16 19:02:10 pause-334400 kubelet[2133]: E0416 19:02:10.424690    2133 kubelet.go:1433] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0416 19:00:21.426160   13932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0416 19:01:09.964689   13932 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0416 19:01:09.995696   13932 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0416 19:01:10.026698   13932 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0416 19:01:10.053251   13932 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0416 19:01:10.080738   13932 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0416 19:01:10.108739   13932 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0416 19:01:10.136098   13932 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-334400 -n pause-334400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-334400 -n pause-334400: exit status 2 (11.2545038s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0416 19:02:10.691383   14332 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-334400" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (421.05s)

TestStartStop/group/newest-cni/serial/SecondStart (10800.45s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-889000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperv --kubernetes-version=v1.30.0-rc.2
panic: test timed out after 3h0m0s
running tests:
	TestNetworkPlugins (17m12s)
	TestStartStop (32m31s)
	TestStartStop/group/embed-certs (10m59s)
	TestStartStop/group/embed-certs/serial (10m59s)
	TestStartStop/group/embed-certs/serial/SecondStart (2m33s)
	TestStartStop/group/newest-cni (9m55s)
	TestStartStop/group/newest-cni/serial (9m55s)
	TestStartStop/group/newest-cni/serial/SecondStart (1m52s)
	TestStartStop/group/no-preload (13m41s)
	TestStartStop/group/no-preload/serial (13m41s)
	TestStartStop/group/no-preload/serial/SecondStart (5m10s)
	TestStartStop/group/old-k8s-version (15m46s)
	TestStartStop/group/old-k8s-version/serial (15m46s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (6m24s)

goroutine 1992 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 10 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0005651e0, 0xc00148fbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0007364f8, {0x510f4a0, 0x2a, 0x2a}, {0x2e6bad5?, 0xd781af?, 0x5131ca0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0009a5860)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0009a5860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 11 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00048cf80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 517 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001418610, 0x37)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2929880?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00137ea80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001418680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0014fa410, {0x3dc3ee0, 0xc0023b1c50}, 0x1, 0xc000456000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0014fa410, 0x3b9aca00, 0x0, 0x1, 0xc000456000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 532
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

goroutine 1798 [chan receive, 8 minutes]:
testing.(*T).Run(0xc0022a2d00, {0x2e1dda3?, 0x60400000004?}, 0xc000071f00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0022a2d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0022a2d00, 0xc000071480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1635
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 29 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 28
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 1991 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc0026a8840, 0xc002b7c960)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1988
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1635 [chan receive, 16 minutes]:
testing.(*T).Run(0xc0022a21a0, {0x2e125f7?, 0x0?}, 0xc000071480)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0022a21a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0022a21a0, 0xc0014181c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1634
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1893 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3de6770, 0xc000456000}, 0xc001309f50, 0xc001309f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3de6770, 0xc000456000}, 0xa0?, 0xc001309f50, 0xc001309f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3de6770?, 0xc000456000?}, 0x0?, 0xe07f40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001309fd0?, 0xe4e6e4?, 0xc0026a7f80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1884
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

goroutine 1806 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0022676c0, {0x2e1dda3?, 0x60400000004?}, 0xc000071a80)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0022676c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0022676c0, 0xc0006e8400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1640
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 266 [IO wait, 165 minutes]:
internal/poll.runtime_pollWait(0x206571ace70, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xccfe76?, 0x51bf0e0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0006587a0, 0xc0013d1bb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc000658788, 0x32c, {0xc000978000?, 0x0?, 0x100000000?}, 0xc000581008?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc000658788, 0xc0013d1d90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc000658788)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc0004b61e0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0004b61e0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0009420f0, {0x3dda2f0, 0xc0004b61e0})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0009420f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0000ed6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 263
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 1989 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc0021b3b20?, 0xcd7f45?, 0x51bf0e0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x2035?, 0xc0021b3b80?, 0xccfe76?, 0x51bf0e0?, 0xc0021b3c08?, 0xcc28db?, 0x20611af0a28?, 0xc0028fdf35?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x714, {0xc0001225fd?, 0x203, 0xd742bf?}, 0xc0012baa08?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0012baa08?, {0xc0001225fd?, 0xcf5210?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0012baa08, {0xc0001225fd, 0x203, 0x203})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a74e0, {0xc0001225fd?, 0xc000624e00?, 0x74?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0012e7b90, {0x3dc2aa0, 0xc0004b37d0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3dc2be0, 0xc0012e7b90}, {0x3dc2aa0, 0xc0004b37d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0021b3e78?, {0x3dc2be0, 0xc0012e7b90})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x50c3ab0?, {0x3dc2be0?, 0xc0012e7b90?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3dc2be0, 0xc0012e7b90}, {0x3dc2b60, 0xc0000a74e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002b7c7e0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1988
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1638 [chan receive, 14 minutes]:
testing.(*T).Run(0xc0022a2820, {0x2e125f7?, 0x0?}, 0xc00070fb00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0022a2820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0022a2820, 0xc001418280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1634
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1485 [chan receive, 18 minutes]:
testing.(*T).Run(0xc00137a000, {0x2e110f3?, 0xd2f56d?}, 0xc002cfc1b0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00137a000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00137a000, 0x3877cd0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1634 [chan receive, 33 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0022a2000, 0x3877ef0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1469
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1944 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc0026a86e0, 0xc002b7cae0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1941
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1676 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0007d4730)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0022a36c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0022a36c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022a36c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0022a36c0, 0xc0006e8080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1675
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1879 [syscall, 8 minutes, locked to thread]:
syscall.SyscallN(0x7ffad8464de0?, {0xc0021b5ab0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x7c8, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc002244ab0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001409ce0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc001409ce0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000565a00, 0xc001409ce0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x3de65b0, 0xc000410070}, 0xc000565a00, {0xc000901ed8, 0x16}, {0xc001872a48?, 0xc0021b5f60?}, {0xe07613?, 0xd58eaf?}, {0xc00017a0c0, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000565a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc000565a00, 0xc000071f00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1798
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1640 [chan receive, 12 minutes]:
testing.(*T).Run(0xc0022a2b60, {0x2e125f7?, 0x0?}, 0xc0006e8400)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0022a2b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0022a2b60, 0xc001418340)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1634
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1679 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0007d4730)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0022a3ba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0022a3ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022a3ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0022a3ba0, 0xc0006e8380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1675
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1942 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc0013edb20?, 0xcd7f45?, 0x51bf0e0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0000025b0?, 0xc0013edb80?, 0xccfe76?, 0x51bf0e0?, 0xc0013edc08?, 0xcc2a45?, 0x20611af0108?, 0xc00054104d?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x36c, {0xc002316272?, 0x58e, 0xd742bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001382f08?, {0xc002316272?, 0xc0013edd58?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001382f08, {0xc002316272, 0x58e, 0x58e})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a7550, {0xc002316272?, 0x203?, 0x239?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0012cc600, {0x3dc2aa0, 0xc00029a8f8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3dc2be0, 0xc0012cc600}, {0x3dc2aa0, 0xc00029a8f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0013ede70?, {0x3dc2be0, 0xc0012cc600})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x50c3ab0?, {0x3dc2be0?, 0xc0012cc600?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3dc2be0, 0xc0012cc600}, {0x3dc2b60, 0xc0000a7550}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002b7c720?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1941
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1884 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002524880, 0xc000456000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1872
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

goroutine 1892 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc002524850, 0x1)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2929880?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0028fd020)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002524880)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013ddc90, {0x3dc3ee0, 0xc0012cdd40}, 0x1, 0xc000456000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0013ddc90, 0x3b9aca00, 0x0, 0x1, 0xc000456000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1884
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

goroutine 1827 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0022664e0, {0x2e1dda3?, 0x60400000004?}, 0xc0006e8580)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0022664e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0022664e0, 0xc0022f8080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1636
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1787 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0022a3d40, {0x2e1dda3?, 0x60400000004?}, 0xc0006e8680)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0022a3d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0022a3d40, 0xc00070fb00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1638
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 519 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 518
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 874 [chan send, 148 minutes]:
os/exec.(*Cmd).watchCtx(0xc002ca9a20, 0xc002b7dda0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 403
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 518 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3de6770, 0xc000456000}, 0xc00229bf50, 0xc00229bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3de6770, 0xc000456000}, 0x10?, 0xc00229bf50, 0xc00229bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3de6770?, 0xc000456000?}, 0xc00029cea0?, 0xe07f40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xe08ea5?, 0xc00029cea0?, 0xc00137ec60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 532
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

goroutine 531 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00137eba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 416
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 532 [chan receive, 154 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001418680, 0xc000456000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 416
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

goroutine 1845 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3de6770, 0xc000456000}, 0xc0014f7f50, 0xc0014f7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3de6770, 0xc000456000}, 0x16?, 0xc0014f7f50, 0xc0014f7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3de6770?, 0xc000456000?}, 0xc002267ba0?, 0xe07f40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xe08ea5?, 0xc002267ba0?, 0xc0006e8000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1833
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

goroutine 1988 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ffad8464de0?, {0xc002335ab0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x6b0, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc002b02180)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0026a8840)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0026a8840)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0022a3380, 0xc0026a8840)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x3de65b0, 0xc000414070}, 0xc0022a3380, {0xc002532000, 0x11}, {0xc03b6eb668?, 0xc002335f60?}, {0xe07613?, 0xd58eaf?}, {0xc0001c4800, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0022a3380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0022a3380, 0xc0006e8580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1827
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1943 [syscall, locked to thread]:
syscall.SyscallN(0xc0024f9b10?, {0xc0024f9b20?, 0xcd7f45?, 0x513eb60?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x100000000000077?, 0xc0024f9b80?, 0xccfe76?, 0x51bf0e0?, 0xc0024f9c08?, 0xcc2a45?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x758, {0xc0022c65be?, 0x9a42, 0xd742bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001366a08?, {0xc0022c65be?, 0x0?, 0x20000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001366a08, {0xc0022c65be, 0x9a42, 0x9a42})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a7598, {0xc0022c65be?, 0x0?, 0xff03?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0012cc630, {0x3dc2aa0, 0xc0012bc040})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3dc2be0, 0xc0012cc630}, {0x3dc2aa0, 0xc0012bc040}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3dc2be0, 0xc0012cc630})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x50c3ab0?, {0x3dc2be0?, 0xc0012cc630?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3dc2be0, 0xc0012cc630}, {0x3dc2b60, 0xc0000a7598}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1941
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1917 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc00011c000, 0xc0023400c0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1914
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 607 [chan send, 154 minutes]:
os/exec.(*Cmd).watchCtx(0xc0026a8b00, 0xc0026a6540)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 606
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1844 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0012ce350, 0x1)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2929880?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002486a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0012ce380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0026fa300, {0x3dc3ee0, 0xc0008ec330}, 0x1, 0xc000456000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0026fa300, 0x3b9aca00, 0x0, 0x1, 0xc000456000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1833
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

goroutine 1914 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ffad8464de0?, {0xc0023c3ab0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x430, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc002c105d0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00011c000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00011c000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002266b60, 0xc00011c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x3de65b0, 0xc00047cf50}, 0xc002266b60, {0xc000651488, 0x12}, {0xc0251837a4?, 0xc0023c3f60?}, {0xe07613?, 0xd58eaf?}, {0xc000a04400, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc002266b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc002266b60, 0xc000071a80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1806
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1846 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1845
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 1637 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0007d4730)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0022a2680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0022a2680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0022a2680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0022a2680, 0xc001418240)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1634
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1883 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0028fd140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 1872
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 1636 [chan receive, 10 minutes]:
testing.(*T).Run(0xc0022a24e0, {0x2e125f7?, 0x0?}, 0xc0022f8080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0022a24e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0022a24e0, 0xc001418200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1634
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1678 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0007d4730)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0022a3a00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0022a3a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022a3a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0022a3a00, 0xc0006e8280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1675
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1677 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0007d4730)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0022a3860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0022a3860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022a3860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0022a3860, 0xc0006e8180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1675
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1880 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc0014e9b20?, 0x0?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x0?, 0x0?, 0x0?, 0x0?, 0xc0014e9c08?, 0xcc28db?, 0x400?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x440, {0xc0023172eb?, 0x515, 0xd742bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0022fd188?, {0xc0023172eb?, 0x0?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0022fd188, {0xc0023172eb, 0x515, 0x515})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007ea1b8, {0xc0023172eb?, 0x206570c5c78?, 0x213?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0023b12c0, {0x3dc2aa0, 0xc0004b37f8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3dc2be0, 0xc0023b12c0}, {0x3dc2aa0, 0xc0004b37f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0014e9e78?, {0x3dc2be0, 0xc0023b12c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x50c3ab0?, {0x3dc2be0?, 0xc0023b12c0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3dc2be0, 0xc0023b12c0}, {0x3dc2b60, 0xc0007ea1b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002b7d7a0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1879
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1916 [syscall, locked to thread]:
syscall.SyscallN(0xc00223bb10?, {0xc00223bb20?, 0xcd7f45?, 0x513eb60?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x10000000000006d?, 0xc00223bb80?, 0xccfe76?, 0x51bf0e0?, 0xc00223bc08?, 0xcc2a45?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6a0, {0xc0012b4f1b?, 0x30e5, 0xd742bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002adcc88?, {0xc0012b4f1b?, 0xcfc25e?, 0x8000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002adcc88, {0xc0012b4f1b, 0x30e5, 0x30e5})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0004b3790, {0xc0012b4f1b?, 0xc0030096c0?, 0x3e33?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002950300, {0x3dc2aa0, 0xc0000a7428})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3dc2be0, 0xc002950300}, {0x3dc2aa0, 0xc0000a7428}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00223be78?, {0x3dc2be0, 0xc002950300})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x50c3ab0?, {0x3dc2be0?, 0xc002950300?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3dc2be0, 0xc002950300}, {0x3dc2b60, 0xc0004b3790}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002b7dc20?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1914
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1675 [chan receive, 18 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0022a2ea0, 0xc002cfc1b0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1485
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1469 [chan receive, 33 minutes]:
testing.(*T).Run(0xc00137a9c0, {0x2e110f3?, 0xe07613?}, 0x3877ef0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00137a9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00137a9c0, 0x3877d18)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1882 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc001409ce0, 0xc000a0c120)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1879
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1915 [syscall, locked to thread]:
syscall.SyscallN(0xc001491b10?, {0xc001491b20?, 0xcd7f45?, 0x51bf0e0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x6d?, 0xc001491b80?, 0xccfe76?, 0x51bf0e0?, 0xc001491c08?, 0xcc2a45?, 0x20611af0108?, 0x8004d?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6f0, {0xc0007bfa3d?, 0x5c3, 0xc0007bf800?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002adc788?, {0xc0007bfa3d?, 0xcfc25e?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002adc788, {0xc0007bfa3d, 0x5c3, 0x5c3})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0004b3770, {0xc0007bfa3d?, 0xc001491d98?, 0x23d?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0029502d0, {0x3dc2aa0, 0xc00029ada8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3dc2be0, 0xc0029502d0}, {0x3dc2aa0, 0xc00029ada8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3dc2be0, 0xc0029502d0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x50c3ab0?, {0x3dc2be0?, 0xc0029502d0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3dc2be0, 0xc0029502d0}, {0x3dc2b60, 0xc0004b3770}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0023407e0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1914
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1990 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0xc00232b000?, {0xc0013efb20?, 0xcd7f45?, 0x51bf0e0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x50df477?, 0xc0013efb80?, 0xccfe76?, 0x51bf0e0?, 0xc0013efc08?, 0xcc2a45?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x40c, {0xc0014b2517?, 0x1ae9, 0xd742bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0012bbb88?, {0xc0014b2517?, 0xcfc25e?, 0x4000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0012bbb88, {0xc0014b2517, 0x1ae9, 0x1ae9})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a75a8, {0xc0014b2517?, 0x206570c44b8?, 0x1fa4?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0012e7bc0, {0x3dc2aa0, 0xc00029b200})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3dc2be0, 0xc0012e7bc0}, {0x3dc2aa0, 0xc00029b200}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3dc2be0, 0xc0012e7bc0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x50c3ab0?, {0x3dc2be0?, 0xc0012e7bc0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3dc2be0, 0xc0012e7bc0}, {0x3dc2b60, 0xc0000a75a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002524080?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1988
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1833 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0012ce380, 0xc000456000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1792
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

goroutine 1832 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002486b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 1792
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 1696 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0007d4730)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000ec680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000ec680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000ec680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000ec680, 0xc00048c480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1675
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1697 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0007d4730)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000eda00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000eda00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000eda00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000eda00, 0xc00048c580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1675
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1698 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0007d4730)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002266680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002266680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002266680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002266680, 0xc00048c680)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1675
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1699 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0007d4730)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002266820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002266820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002266820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002266820, 0xc00048c700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1675
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1700 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0007d4730)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0022669c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0022669c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022669c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0022669c0, 0xc00048c780)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1675
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1881 [syscall, locked to thread]:
syscall.SyscallN(0xc0014ffdc0?, {0xc00219fb20?, 0x0?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x0?, 0x0?, 0x3e2d20322e333a65?, 0x726573555c3a4320?, 0x6e696b6e656a5c73?, 0x756b696e696d2e73?, 0x696e696d5c356562?, 0x746e692d6562756b?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x330, {0xc0021d777d?, 0x2883, 0xd742bf?}, 0x1b7ebaecd5e98b5e?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002302288?, {0xc0021d777d?, 0xdcc8?, 0xdcc8?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002302288, {0xc0021d777d, 0x2883, 0x2883})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007ea1e0, {0xc0021d777d?, 0x206570c44b8?, 0xff04?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0023b12f0, {0x3dc2aa0, 0xc00029ae18})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3dc2be0, 0xc0023b12f0}, {0x3dc2aa0, 0xc00029ae18}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3dc2be0, 0xc0023b12f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x50c3ab0?, {0x3dc2be0?, 0xc0023b12f0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3dc2be0, 0xc0023b12f0}, {0x3dc2b60, 0xc0007ea1e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0022ef0e0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1879
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1894 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1893
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 1941 [syscall, 6 minutes, locked to thread]:
syscall.SyscallN(0x7ffad8464de0?, {0xc0024ffab0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x458, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc002ef3a10)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0026a86e0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0026a86e0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc00137ab60, 0xc0026a86e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x3de65b0, 0xc00047ce70}, 0xc00137ab60, {0xc002e78480, 0x11}, {0xc02623e634?, 0xc0024fff60?}, {0xe07613?, 0xd58eaf?}, {0xc00049fc00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00137ab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00137ab60, 0xc0006e8680)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1787
	/usr/local/go/src/testing/testing.go:1742 +0x390


Test pass (135/195)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 16.38
4 TestDownloadOnly/v1.20.0/preload-exists 0.06
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.24
9 TestDownloadOnly/v1.20.0/DeleteAll 1.05
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.03
12 TestDownloadOnly/v1.29.3/json-events 9.88
13 TestDownloadOnly/v1.29.3/preload-exists 0
16 TestDownloadOnly/v1.29.3/kubectl 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.2
18 TestDownloadOnly/v1.29.3/DeleteAll 1.02
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.98
21 TestDownloadOnly/v1.30.0-rc.2/json-events 9.76
22 TestDownloadOnly/v1.30.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.30.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.30.0-rc.2/LogsDuration 0.19
27 TestDownloadOnly/v1.30.0-rc.2/DeleteAll 0.96
28 TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds 0.97
30 TestBinaryMirror 6.22
31 TestOffline 279.25
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.25
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.24
37 TestCertOptions 436.16
38 TestCertExpiration 715.55
39 TestDockerFlags 338.61
40 TestForceSystemdFlag 489.41
48 TestErrorSpam/start 15.15
49 TestErrorSpam/status 33.4
50 TestErrorSpam/pause 23.87
51 TestErrorSpam/unpause 168.7
52 TestErrorSpam/stop 87.63
55 TestFunctional/serial/CopySyncFile 0.03
56 TestFunctional/serial/StartWithProxy 189.94
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 117.69
59 TestFunctional/serial/KubeContext 0.11
60 TestFunctional/serial/KubectlGetPods 0.2
63 TestFunctional/serial/CacheCmd/cache/add_remote 23.68
64 TestFunctional/serial/CacheCmd/cache/add_local 9.42
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.2
66 TestFunctional/serial/CacheCmd/cache/list 0.21
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 8.45
68 TestFunctional/serial/CacheCmd/cache/cache_reload 32.57
69 TestFunctional/serial/CacheCmd/cache/delete 0.45
70 TestFunctional/serial/MinikubeKubectlCmd 0.4
72 TestFunctional/serial/ExtraConfig 143.76
73 TestFunctional/serial/ComponentHealth 0.16
74 TestFunctional/serial/LogsCmd 7.44
75 TestFunctional/serial/LogsFileCmd 9.34
76 TestFunctional/serial/InvalidService 19.67
82 TestFunctional/parallel/StatusCmd 37.62
86 TestFunctional/parallel/ServiceCmdConnect 24.05
87 TestFunctional/parallel/AddonsCmd 0.69
88 TestFunctional/parallel/PersistentVolumeClaim 36.96
90 TestFunctional/parallel/SSHCmd 17.9
91 TestFunctional/parallel/CpCmd 56.69
92 TestFunctional/parallel/MySQL 50.99
93 TestFunctional/parallel/FileSync 9.84
94 TestFunctional/parallel/CertSync 61.41
98 TestFunctional/parallel/NodeLabels 0.19
100 TestFunctional/parallel/NonActiveRuntimeDisabled 11.06
102 TestFunctional/parallel/License 2.64
103 TestFunctional/parallel/Version/short 0.27
104 TestFunctional/parallel/Version/components 7.43
105 TestFunctional/parallel/ImageCommands/ImageListShort 7.35
106 TestFunctional/parallel/ImageCommands/ImageListTable 7
107 TestFunctional/parallel/ImageCommands/ImageListJson 7.39
108 TestFunctional/parallel/ImageCommands/ImageListYaml 7.09
109 TestFunctional/parallel/ImageCommands/ImageBuild 23.62
110 TestFunctional/parallel/ImageCommands/Setup 4.12
111 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 22.54
112 TestFunctional/parallel/DockerEnv/powershell 43.45
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 19.63
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 25.85
115 TestFunctional/parallel/UpdateContextCmd/no_changes 2.26
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.29
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.3
118 TestFunctional/parallel/ServiceCmd/DeployApp 46.5
119 TestFunctional/parallel/ImageCommands/ImageSaveToFile 9.94
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 7.72
122 TestFunctional/parallel/ImageCommands/ImageRemove 14.62
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 20.55
126 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 15.29
127 TestFunctional/parallel/ServiceCmd/List 11.98
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 10.07
135 TestFunctional/parallel/ServiceCmd/JSONOutput 12.37
137 TestFunctional/parallel/ProfileCmd/profile_not_create 9.86
138 TestFunctional/parallel/ProfileCmd/profile_list 10.8
140 TestFunctional/parallel/ProfileCmd/profile_json_output 9.87
142 TestFunctional/delete_addon-resizer_images 0.39
143 TestFunctional/delete_my-image_image 0.16
144 TestFunctional/delete_minikube_cached_images 0.17
152 TestMultiControlPlane/serial/NodeLabels 0.16
163 TestJSONOutput/start/Command 191.33
164 TestJSONOutput/start/Audit 0
166 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/pause/Command 7.11
170 TestJSONOutput/pause/Audit 0
172 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/unpause/Command 6.96
176 TestJSONOutput/unpause/Audit 0
178 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/stop/Command 37.52
182 TestJSONOutput/stop/Audit 0
184 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
186 TestErrorJSONOutput 1.25
191 TestMainNoArgs 0.2
192 TestMinikubeProfile 479.74
195 TestMountStart/serial/StartWithMountFirst 141.73
196 TestMountStart/serial/VerifyMountFirst 8.72
197 TestMountStart/serial/StartWithMountSecond 141.84
198 TestMountStart/serial/VerifyMountSecond 8.69
199 TestMountStart/serial/DeleteFirst 25.32
200 TestMountStart/serial/VerifyMountPostDelete 8.92
201 TestMountStart/serial/Stop 27.82
205 TestMultiNode/serial/FreshStart2Nodes 386.05
206 TestMultiNode/serial/DeployApp2Nodes 7.87
209 TestMultiNode/serial/MultiNodeLabels 0.15
210 TestMultiNode/serial/ProfileList 8.66
212 TestMultiNode/serial/StopNode 73.31
221 TestPreload 493.71
222 TestScheduledStopWindows 307.46
227 TestRunningBinaryUpgrade 882.62
229 TestKubernetesUpgrade 1156.34
232 TestNoKubernetes/serial/StartNoK8sWithVersion 0.28
234 TestStoppedBinaryUpgrade/Setup 0.66
235 TestStoppedBinaryUpgrade/Upgrade 720.52
244 TestPause/serial/Start 421.62
246 TestStoppedBinaryUpgrade/MinikubeLogs 8.99
TestDownloadOnly/v1.20.0/json-events (16.38s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-356200 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-356200 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (16.3743829s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (16.38s)

TestDownloadOnly/v1.20.0/preload-exists (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.06s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-356200
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-356200: exit status 85 (235.5819ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-356200 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:20 UTC |          |
	|         | -p download-only-356200        |                      |                   |                |                     |          |
	|         | --force --alsologtostderr      |                      |                   |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |                |                     |          |
	|         | --container-runtime=docker     |                      |                   |                |                     |          |
	|         | --driver=hyperv                |                      |                   |                |                     |          |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:20:50
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:20:50.219186   13404 out.go:291] Setting OutFile to fd 620 ...
	I0416 16:20:50.220192   13404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:20:50.220192   13404 out.go:304] Setting ErrFile to fd 624...
	I0416 16:20:50.220192   13404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0416 16:20:50.232188   13404 root.go:314] Error reading config file at C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0416 16:20:50.240188   13404 out.go:298] Setting JSON to true
	I0416 16:20:50.243193   13404 start.go:129] hostinfo: {"hostname":"minikube5","uptime":22080,"bootTime":1713262370,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 16:20:50.243193   13404 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 16:20:50.245186   13404 out.go:97] [download-only-356200] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 16:20:50.245186   13404 notify.go:220] Checking for updates...
	W0416 16:20:50.245186   13404 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0416 16:20:50.246187   13404 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:20:50.246187   13404 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 16:20:50.247186   13404 out.go:169] MINIKUBE_LOCATION=18649
	I0416 16:20:50.248193   13404 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0416 16:20:50.249187   13404 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0416 16:20:50.249187   13404 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:20:55.131699   13404 out.go:97] Using the hyperv driver based on user configuration
	I0416 16:20:55.131699   13404 start.go:297] selected driver: hyperv
	I0416 16:20:55.131699   13404 start.go:901] validating driver "hyperv" against <nil>
	I0416 16:20:55.132326   13404 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:20:55.181835   13404 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0416 16:20:55.183084   13404 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0416 16:20:55.183084   13404 cni.go:84] Creating CNI manager for ""
	I0416 16:20:55.183084   13404 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0416 16:20:55.183084   13404 start.go:340] cluster config:
	{Name:download-only-356200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-356200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:20:55.185356   13404 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:20:55.186462   13404 out.go:97] Downloading VM boot image ...
	I0416 16:20:55.186997   13404 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 16:21:00.287966   13404 out.go:97] Starting "download-only-356200" primary control-plane node in "download-only-356200" cluster
	I0416 16:21:00.287966   13404 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0416 16:21:00.328568   13404 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0416 16:21:00.328568   13404 cache.go:56] Caching tarball of preloaded images
	I0416 16:21:00.329105   13404 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0416 16:21:00.330161   13404 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0416 16:21:00.330161   13404 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0416 16:21:00.392961   13404 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0416 16:21:03.682938   13404 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0416 16:21:03.683514   13404 preload.go:255] verifying checksum of C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0416 16:21:04.653508   13404 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0416 16:21:04.654567   13404 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-356200\config.json ...
	I0416 16:21:04.655059   13404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-356200\config.json: {Name:mk32462a023f104f063b64ebb04f39355b1a14d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:21:04.655325   13404 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0416 16:21:04.657322   13404 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-356200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-356200"

-- /stdout --
** stderr ** 
	W0416 16:21:06.619162    3636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.24s)

TestDownloadOnly/v1.20.0/DeleteAll (1.05s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.048176s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.05s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.03s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-356200
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-356200: (1.0325521s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.03s)

TestDownloadOnly/v1.29.3/json-events (9.88s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-475300 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-475300 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=hyperv: (9.8749448s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (9.88s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
--- PASS: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.2s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-475300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-475300: exit status 85 (203.1766ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-356200 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:20 UTC |                     |
	|         | -p download-only-356200        |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |                |                     |                     |
	|         | --container-runtime=docker     |                      |                   |                |                     |                     |
	|         | --driver=hyperv                |                      |                   |                |                     |                     |
	| delete  | --all                          | minikube             | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:21 UTC | 16 Apr 24 16:21 UTC |
	| delete  | -p download-only-356200        | download-only-356200 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:21 UTC | 16 Apr 24 16:21 UTC |
	| start   | -o=json --download-only        | download-only-475300 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:21 UTC |                     |
	|         | -p download-only-475300        |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |                   |                |                     |                     |
	|         | --container-runtime=docker     |                      |                   |                |                     |                     |
	|         | --driver=hyperv                |                      |                   |                |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:21:08
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:21:08.979995    4900 out.go:291] Setting OutFile to fd 808 ...
	I0416 16:21:08.980165    4900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:21:08.980165    4900 out.go:304] Setting ErrFile to fd 812...
	I0416 16:21:08.980165    4900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:21:09.005752    4900 out.go:298] Setting JSON to true
	I0416 16:21:09.008212    4900 start.go:129] hostinfo: {"hostname":"minikube5","uptime":22098,"bootTime":1713262370,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 16:21:09.008212    4900 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 16:21:09.009379    4900 out.go:97] [download-only-475300] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 16:21:09.009379    4900 notify.go:220] Checking for updates...
	I0416 16:21:09.010088    4900 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:21:09.010376    4900 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 16:21:09.010978    4900 out.go:169] MINIKUBE_LOCATION=18649
	I0416 16:21:09.011583    4900 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0416 16:21:09.012035    4900 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0416 16:21:09.013437    4900 driver.go:392] Setting default libvirt URI to qemu:///system
	
	
	* The control-plane node download-only-475300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-475300"

-- /stdout --
** stderr ** 
	W0416 16:21:18.804945    8892 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.20s)

TestDownloadOnly/v1.29.3/DeleteAll (1.02s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.0174093s)
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (1.02s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.98s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-475300
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.98s)

TestDownloadOnly/v1.30.0-rc.2/json-events (9.76s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-222900 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-222900 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=docker --driver=hyperv: (9.761869s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/json-events (9.76s)

TestDownloadOnly/v1.30.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.30.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.19s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-222900
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-222900: exit status 85 (191.1126ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-356200 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:20 UTC |                     |
	|         | -p download-only-356200           |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |                   |                |                     |                     |
	|         | --container-runtime=docker        |                      |                   |                |                     |                     |
	|         | --driver=hyperv                   |                      |                   |                |                     |                     |
	| delete  | --all                             | minikube             | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:21 UTC | 16 Apr 24 16:21 UTC |
	| delete  | -p download-only-356200           | download-only-356200 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:21 UTC | 16 Apr 24 16:21 UTC |
	| start   | -o=json --download-only           | download-only-475300 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:21 UTC |                     |
	|         | -p download-only-475300           |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |                   |                |                     |                     |
	|         | --container-runtime=docker        |                      |                   |                |                     |                     |
	|         | --driver=hyperv                   |                      |                   |                |                     |                     |
	| delete  | --all                             | minikube             | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:21 UTC | 16 Apr 24 16:21 UTC |
	| delete  | -p download-only-475300           | download-only-475300 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:21 UTC | 16 Apr 24 16:21 UTC |
	| start   | -o=json --download-only           | download-only-222900 | minikube5\jenkins | v1.33.0-beta.0 | 16 Apr 24 16:21 UTC |                     |
	|         | -p download-only-222900           |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                      |                   |                |                     |                     |
	|         | --container-runtime=docker        |                      |                   |                |                     |                     |
	|         | --driver=hyperv                   |                      |                   |                |                     |                     |
	|---------|-----------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:21:21
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:21:21.049328    2572 out.go:291] Setting OutFile to fd 724 ...
	I0416 16:21:21.049663    2572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:21:21.049663    2572 out.go:304] Setting ErrFile to fd 804...
	I0416 16:21:21.049663    2572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:21:21.067938    2572 out.go:298] Setting JSON to true
	I0416 16:21:21.070052    2572 start.go:129] hostinfo: {"hostname":"minikube5","uptime":22111,"bootTime":1713262370,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 16:21:21.071010    2572 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 16:21:21.072317    2572 out.go:97] [download-only-222900] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 16:21:21.072481    2572 notify.go:220] Checking for updates...
	I0416 16:21:21.072708    2572 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:21:21.073715    2572 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 16:21:21.074624    2572 out.go:169] MINIKUBE_LOCATION=18649
	I0416 16:21:21.075444    2572 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0416 16:21:21.076488    2572 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0416 16:21:21.077990    2572 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:21:25.759248    2572 out.go:97] Using the hyperv driver based on user configuration
	I0416 16:21:25.759334    2572 start.go:297] selected driver: hyperv
	I0416 16:21:25.759334    2572 start.go:901] validating driver "hyperv" against <nil>
	I0416 16:21:25.759460    2572 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:21:25.801269    2572 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0416 16:21:25.802279    2572 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0416 16:21:25.802341    2572 cni.go:84] Creating CNI manager for ""
	I0416 16:21:25.802341    2572 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0416 16:21:25.802491    2572 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0416 16:21:25.802703    2572 start.go:340] cluster config:
	{Name:download-only-222900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:download-only-222900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:21:25.802703    2572 iso.go:125] acquiring lock: {Name:mka3f8eef32f5becd06d05d1d837c2a92a8fa70c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:21:25.804043    2572 out.go:97] Starting "download-only-222900" primary control-plane node in "download-only-222900" cluster
	I0416 16:21:25.804097    2572 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0416 16:21:25.840238    2572 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0416 16:21:25.840238    2572 cache.go:56] Caching tarball of preloaded images
	I0416 16:21:25.841422    2572 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0416 16:21:25.842083    2572 out.go:97] Downloading Kubernetes v1.30.0-rc.2 preload ...
	I0416 16:21:25.842083    2572 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0416 16:21:25.906877    2572 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:9834337eee074d8b5e25932a2917a549 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-222900 host does not exist
	  To start a cluster, run: "minikube start -p download-only-222900"

-- /stdout --
** stderr ** 
	W0416 16:21:30.763786    6360 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.19s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.96s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.96s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.97s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-222900
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.97s)

TestBinaryMirror (6.22s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-645500 --alsologtostderr --binary-mirror http://127.0.0.1:52794 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-645500 --alsologtostderr --binary-mirror http://127.0.0.1:52794 --driver=hyperv: (5.472546s)
helpers_test.go:175: Cleaning up "binary-mirror-645500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-645500
--- PASS: TestBinaryMirror (6.22s)

TestOffline (279.25s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-833900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-833900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m47.4918073s)
helpers_test.go:175: Cleaning up "offline-docker-833900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-833900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-833900: (51.7619459s)
--- PASS: TestOffline (279.25s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.25s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-257600
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-257600: exit status 85 (245.96ms)

-- stdout --
	* Profile "addons-257600" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-257600"

-- /stdout --
** stderr ** 
	W0416 16:21:42.141118    7252 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.25s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.24s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-257600
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-257600: exit status 85 (235.7891ms)

-- stdout --
	* Profile "addons-257600" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-257600"

-- /stdout --
** stderr ** 
	W0416 16:21:42.140119    3116 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.24s)

TestCertOptions (436.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-104100 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-104100 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (6m18.4894466s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-104100 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-104100 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (9.2182023s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-104100 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-104100 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-104100 -- "sudo cat /etc/kubernetes/admin.conf": (8.9262504s)
helpers_test.go:175: Cleaning up "cert-options-104100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-104100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-104100: (39.3973815s)
--- PASS: TestCertOptions (436.16s)

TestCertExpiration (715.55s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-396200 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-396200 --memory=2048 --cert-expiration=3m --driver=hyperv: (3m54.8529906s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-396200 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-396200 --memory=2048 --cert-expiration=8760h --driver=hyperv: (4m19.946553s)
helpers_test.go:175: Cleaning up "cert-expiration-396200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-396200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-396200: (40.735068s)
--- PASS: TestCertExpiration (715.55s)

TestDockerFlags (338.61s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-442400 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-442400 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (4m35.6257815s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-442400 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-442400 ssh "sudo systemctl show docker --property=Environment --no-pager": (9.046677s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-442400 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-442400 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (9.0727627s)
helpers_test.go:175: Cleaning up "docker-flags-442400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-442400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-442400: (44.8681591s)
--- PASS: TestDockerFlags (338.61s)

TestForceSystemdFlag (489.41s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-833900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-833900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (7m20.8945754s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-833900 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-833900 ssh "docker info --format {{.CgroupDriver}}": (9.1358359s)
helpers_test.go:175: Cleaning up "force-systemd-flag-833900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-833900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-833900: (39.3787532s)
--- PASS: TestForceSystemdFlag (489.41s)

TestErrorSpam/start (15.15s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 start --dry-run: (5.0302889s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 start --dry-run: (5.0322512s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 start --dry-run: (5.0875099s)
--- PASS: TestErrorSpam/start (15.15s)

TestErrorSpam/status (33.4s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 status: exit status 6 (11.4985969s)

-- stdout --
	nospam-199300
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0416 16:29:46.985712    8276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0416 16:29:58.316541    8276 status.go:417] kubeconfig endpoint: get endpoint: "nospam-199300" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\\Users\\jenkins.minikube5\\AppData\\Local\\Temp\\nospam-199300 status" failed: exit status 6
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 status: exit status 6 (11.0734871s)

-- stdout --
	nospam-199300
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0416 16:29:58.469927    8488 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0416 16:30:09.393152    8488 status.go:417] kubeconfig endpoint: get endpoint: "nospam-199300" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\\Users\\jenkins.minikube5\\AppData\\Local\\Temp\\nospam-199300 status" failed: exit status 6
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 status: exit status 6 (10.828835s)

-- stdout --
	nospam-199300
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0416 16:30:09.560505    8360 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0416 16:30:20.222386    8360 status.go:417] kubeconfig endpoint: get endpoint: "nospam-199300" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
error_spam_test.go:184: "out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\\Users\\jenkins.minikube5\\AppData\\Local\\Temp\\nospam-199300 status" failed: exit status 6
--- PASS: TestErrorSpam/status (33.40s)

TestErrorSpam/pause (23.87s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 pause: exit status 80 (7.5199898s)

-- stdout --
	* Pausing node nospam-199300 ... 
	
	

-- /stdout --
** stderr ** 
	W0416 16:30:20.372787    1936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_delete_2fbec3859a6ca3e01399e1c77f10a046aa20f4c7_6.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\\Users\\jenkins.minikube5\\AppData\\Local\\Temp\\nospam-199300 pause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 pause: exit status 80 (8.3188914s)

-- stdout --
	* Pausing node nospam-199300 ... 
	
	

-- /stdout --
** stderr ** 
	W0416 16:30:27.891385   12760 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_delete_2fbec3859a6ca3e01399e1c77f10a046aa20f4c7_6.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\\Users\\jenkins.minikube5\\AppData\\Local\\Temp\\nospam-199300 pause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 pause: exit status 80 (8.0282696s)

-- stdout --
	* Pausing node nospam-199300 ... 
	
	

-- /stdout --
** stderr ** 
	W0416 16:30:36.216635    8948 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_delete_2fbec3859a6ca3e01399e1c77f10a046aa20f4c7_6.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:184: "out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\\Users\\jenkins.minikube5\\AppData\\Local\\Temp\\nospam-199300 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (23.87s)

TestErrorSpam/unpause (168.7s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 unpause: exit status 80 (48.2025781s)

-- stdout --
	* Unpausing node nospam-199300 ... 
	
	

-- /stdout --
** stderr ** 
	W0416 16:30:44.245064   13988 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_delete_2fbec3859a6ca3e01399e1c77f10a046aa20f4c7_6.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\\Users\\jenkins.minikube5\\AppData\\Local\\Temp\\nospam-199300 unpause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 unpause: exit status 80 (1m0.2556905s)

-- stdout --
	* Unpausing node nospam-199300 ... 
	
	

-- /stdout --
** stderr ** 
	W0416 16:31:32.453821    9156 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_delete_2fbec3859a6ca3e01399e1c77f10a046aa20f4c7_6.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\\Users\\jenkins.minikube5\\AppData\\Local\\Temp\\nospam-199300 unpause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 unpause: exit status 80 (1m0.2411927s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-199300 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 16:32:32.717077   12880 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_delete_2fbec3859a6ca3e01399e1c77f10a046aa20f4c7_6.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:184: "out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\\Users\\jenkins.minikube5\\AppData\\Local\\Temp\\nospam-199300 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (168.70s)

                                                
                                    
TestErrorSpam/stop (87.63s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 stop: (1m8.1799885s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 stop: (9.6707048s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199300 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-199300 stop: (9.7753811s)
--- PASS: TestErrorSpam/stop (87.63s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\5324\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (189.94s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-538700 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-538700 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m9.9282437s)
--- PASS: TestFunctional/serial/StartWithProxy (189.94s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (117.69s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-538700 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-538700 --alsologtostderr -v=8: (1m57.6859787s)
functional_test.go:659: soft start took 1m57.6881026s for "functional-538700" cluster.
--- PASS: TestFunctional/serial/SoftStart (117.69s)

                                                
                                    
TestFunctional/serial/KubeContext (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.11s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.2s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-538700 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (23.68s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 cache add registry.k8s.io/pause:3.1: (7.9698907s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 cache add registry.k8s.io/pause:3.3: (7.8423789s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 cache add registry.k8s.io/pause:latest: (7.8639937s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (23.68s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (9.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-538700 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1527141933\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-538700 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1527141933\001: (1.5592273s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 cache add minikube-local-cache-test:functional-538700
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 cache add minikube-local-cache-test:functional-538700: (7.4654607s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 cache delete minikube-local-cache-test:functional-538700
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-538700
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (9.42s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.45s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 ssh sudo crictl images: (8.4450584s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.45s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (32.57s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 ssh sudo docker rmi registry.k8s.io/pause:latest: (8.5507015s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-538700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (8.3923654s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 16:41:14.337039   10008 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 cache reload: (7.2228195s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (8.3999166s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (32.57s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.45s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.45s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 kubectl -- --context functional-538700 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.40s)

                                                
                                    
TestFunctional/serial/ExtraConfig (143.76s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-538700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-538700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m23.7607423s)
functional_test.go:757: restart took 2m23.7607423s for "functional-538700" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (143.76s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-538700 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.16s)

                                                
                                    
TestFunctional/serial/LogsCmd (7.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 logs: (7.4385845s)
--- PASS: TestFunctional/serial/LogsCmd (7.44s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (9.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3523608613\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3523608613\001\logs.txt: (9.3405243s)
--- PASS: TestFunctional/serial/LogsFileCmd (9.34s)

                                                
                                    
TestFunctional/serial/InvalidService (19.67s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-538700 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-538700
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-538700: exit status 115 (15.0658566s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://172.19.95.169:31413 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0416 16:44:53.390526    4904 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_service_d27a1c5599baa2f8050d003f41b0266333639286_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-538700 delete -f testdata\invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-538700 delete -f testdata\invalidsvc.yaml: (1.2724587s)
--- PASS: TestFunctional/serial/InvalidService (19.67s)

                                                
                                    
TestFunctional/parallel/StatusCmd (37.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 status: (11.6983586s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (12.3198635s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 status -o json: (13.6036621s)
--- PASS: TestFunctional/parallel/StatusCmd (37.62s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (24.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-538700 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-538700 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-z8nr9" [5f594500-89db-4b6d-842e-412d41c2bf66] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-z8nr9" [5f594500-89db-4b6d-842e-412d41c2bf66] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.0205923s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 service hello-node-connect --url: (16.6819238s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.19.95.169:31835
functional_test.go:1671: http://172.19.95.169:31835: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-z8nr9

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.19.95.169:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=172.19.95.169:31835
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (24.05s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.69s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (36.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2526ffa5-f4ff-4859-9389-2b1bde0ea350] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0095191s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-538700 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-538700 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-538700 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-538700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5b71b331-aa68-482d-8209-80499f4cec71] Pending
helpers_test.go:344: "sp-pod" [5b71b331-aa68-482d-8209-80499f4cec71] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5b71b331-aa68-482d-8209-80499f4cec71] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.0224333s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-538700 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-538700 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-538700 delete -f testdata/storage-provisioner/pod.yaml: (1.0130589s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-538700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [eca81542-926a-42c6-a1bd-8743d00396ff] Pending
helpers_test.go:344: "sp-pod" [eca81542-926a-42c6-a1bd-8743d00396ff] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [eca81542-926a-42c6-a1bd-8743d00396ff] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0142374s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-538700 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.96s)

                                                
                                    
TestFunctional/parallel/SSHCmd (17.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 ssh "echo hello": (9.1564704s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 ssh "cat /etc/hostname": (8.7447278s)
--- PASS: TestFunctional/parallel/SSHCmd (17.90s)

TestFunctional/parallel/CpCmd (56.69s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.4930806s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh -n functional-538700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 ssh -n functional-538700 "sudo cat /home/docker/cp-test.txt": (10.1636063s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 cp functional-538700:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd2638602116\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 cp functional-538700:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd2638602116\001\cp-test.txt: (9.6131086s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh -n functional-538700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 ssh -n functional-538700 "sudo cat /home/docker/cp-test.txt": (9.9125858s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (7.7970992s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh -n functional-538700 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 ssh -n functional-538700 "sudo cat /tmp/does/not/exist/cp-test.txt": (10.7064616s)
--- PASS: TestFunctional/parallel/CpCmd (56.69s)

TestFunctional/parallel/MySQL (50.99s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-538700 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-kfvqg" [56a376b9-0a93-4a12-bf8a-97472428a95f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-kfvqg" [56a376b9-0a93-4a12-bf8a-97472428a95f] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 43.0181841s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-538700 exec mysql-859648c796-kfvqg -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-538700 exec mysql-859648c796-kfvqg -- mysql -ppassword -e "show databases;": exit status 1 (285.8314ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-538700 exec mysql-859648c796-kfvqg -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-538700 exec mysql-859648c796-kfvqg -- mysql -ppassword -e "show databases;": exit status 1 (465.6397ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-538700 exec mysql-859648c796-kfvqg -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-538700 exec mysql-859648c796-kfvqg -- mysql -ppassword -e "show databases;": exit status 1 (368.5596ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-538700 exec mysql-859648c796-kfvqg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (50.99s)

TestFunctional/parallel/FileSync (9.84s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/5324/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh "sudo cat /etc/test/nested/copy/5324/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 ssh "sudo cat /etc/test/nested/copy/5324/hosts": (9.8375347s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (9.84s)

TestFunctional/parallel/CertSync (61.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/5324.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh "sudo cat /etc/ssl/certs/5324.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 ssh "sudo cat /etc/ssl/certs/5324.pem": (11.0168677s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/5324.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh "sudo cat /usr/share/ca-certificates/5324.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 ssh "sudo cat /usr/share/ca-certificates/5324.pem": (9.7746029s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 ssh "sudo cat /etc/ssl/certs/51391683.0": (9.7834613s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/53242.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh "sudo cat /etc/ssl/certs/53242.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 ssh "sudo cat /etc/ssl/certs/53242.pem": (9.8954265s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/53242.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh "sudo cat /usr/share/ca-certificates/53242.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 ssh "sudo cat /usr/share/ca-certificates/53242.pem": (10.3224634s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.611918s)
--- PASS: TestFunctional/parallel/CertSync (61.41s)

TestFunctional/parallel/NodeLabels (0.19s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-538700 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.19s)

TestFunctional/parallel/NonActiveRuntimeDisabled (11.06s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-538700 ssh "sudo systemctl is-active crio": exit status 1 (11.0640524s)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	W0416 16:45:11.456872    2008 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (11.06s)

TestFunctional/parallel/License (2.64s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (2.6241118s)
--- PASS: TestFunctional/parallel/License (2.64s)

TestFunctional/parallel/Version/short (0.27s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 version --short
--- PASS: TestFunctional/parallel/Version/short (0.27s)

TestFunctional/parallel/Version/components (7.43s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 version -o=json --components: (7.428842s)
--- PASS: TestFunctional/parallel/Version/components (7.43s)

TestFunctional/parallel/ImageCommands/ImageListShort (7.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image ls --format short --alsologtostderr: (7.3500935s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-538700 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-538700
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-538700
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-538700 image ls --format short --alsologtostderr:
W0416 16:48:02.083677    8272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0416 16:48:02.157530    8272 out.go:291] Setting OutFile to fd 984 ...
I0416 16:48:02.158147    8272 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:48:02.158147    8272 out.go:304] Setting ErrFile to fd 884...
I0416 16:48:02.158147    8272 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:48:02.172194    8272 config.go:182] Loaded profile config "functional-538700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0416 16:48:02.172465    8272 config.go:182] Loaded profile config "functional-538700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0416 16:48:02.172465    8272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
I0416 16:48:04.263983    8272 main.go:141] libmachine: [stdout =====>] : Running
I0416 16:48:04.263983    8272 main.go:141] libmachine: [stderr =====>] : 
I0416 16:48:04.280802    8272 ssh_runner.go:195] Run: systemctl --version
I0416 16:48:04.281206    8272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
I0416 16:48:06.449398    8272 main.go:141] libmachine: [stdout =====>] : Running
I0416 16:48:06.449489    8272 main.go:141] libmachine: [stderr =====>] : 
I0416 16:48:06.449559    8272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
I0416 16:48:09.122738    8272 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
I0416 16:48:09.122738    8272 main.go:141] libmachine: [stderr =====>] : 
I0416 16:48:09.122738    8272 sshutil.go:53] new ssh client: &{IP:172.19.95.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-538700\id_rsa Username:docker}
I0416 16:48:09.230801    8272 ssh_runner.go:235] Completed: systemctl --version: (4.9497175s)
I0416 16:48:09.240799    8272 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.35s)

TestFunctional/parallel/ImageCommands/ImageListTable (7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image ls --format table --alsologtostderr: (6.99525s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-538700 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/minikube-local-cache-test | functional-538700 | 32ae46a960e1f | 30B    |
| docker.io/library/nginx                     | alpine            | e289a478ace02 | 42.6MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/google-containers/addon-resizer      | functional-538700 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-proxy                  | v1.29.3           | a1d263b5dc5b0 | 82.4MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.29.3           | 39f995c9f1996 | 127MB  |
| registry.k8s.io/kube-scheduler              | v1.29.3           | 8c390d98f50c0 | 59.6MB |
| registry.k8s.io/kube-controller-manager     | v1.29.3           | 6052a25da3f97 | 122MB  |
| docker.io/library/nginx                     | latest            | c613f16b66424 | 187MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-538700 image ls --format table --alsologtostderr:
W0416 16:48:09.438733   14124 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0416 16:48:09.503415   14124 out.go:291] Setting OutFile to fd 796 ...
I0416 16:48:09.504302   14124 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:48:09.504302   14124 out.go:304] Setting ErrFile to fd 760...
I0416 16:48:09.504302   14124 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:48:09.522883   14124 config.go:182] Loaded profile config "functional-538700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0416 16:48:09.522883   14124 config.go:182] Loaded profile config "functional-538700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0416 16:48:09.523897   14124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
I0416 16:48:11.613385   14124 main.go:141] libmachine: [stdout =====>] : Running
I0416 16:48:11.613427   14124 main.go:141] libmachine: [stderr =====>] : 
I0416 16:48:11.622577   14124 ssh_runner.go:195] Run: systemctl --version
I0416 16:48:11.622577   14124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
I0416 16:48:13.675937   14124 main.go:141] libmachine: [stdout =====>] : Running
I0416 16:48:13.675937   14124 main.go:141] libmachine: [stderr =====>] : 
I0416 16:48:13.675937   14124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
I0416 16:48:16.120197   14124 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
I0416 16:48:16.120197   14124 main.go:141] libmachine: [stderr =====>] : 
I0416 16:48:16.120752   14124 sshutil.go:53] new ssh client: &{IP:172.19.95.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-538700\id_rsa Username:docker}
I0416 16:48:16.242505   14124 ssh_runner.go:235] Completed: systemctl --version: (4.6196669s)
I0416 16:48:16.258715   14124 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.00s)

TestFunctional/parallel/ImageCommands/ImageListJson (7.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image ls --format json --alsologtostderr: (7.3920718s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-538700 image ls --format json --alsologtostderr:
[{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"122000000"},{"id":"c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9
f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"32ae46a960e1f786b027837d284d5ecfe6e0b5290069a79f0065924dd47def08","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-538700"],"size":"30"},{"id":"39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"127000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"82400000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/goog
le-containers/addon-resizer:functional-538700"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"59600000"},{"id":"e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-538700 image ls --format json --alsologtostderr:
W0416 16:48:02.091686   14120 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0416 16:48:02.165095   14120 out.go:291] Setting OutFile to fd 644 ...
I0416 16:48:02.178299   14120 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:48:02.178299   14120 out.go:304] Setting ErrFile to fd 648...
I0416 16:48:02.178508   14120 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:48:02.194487   14120 config.go:182] Loaded profile config "functional-538700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0416 16:48:02.195586   14120 config.go:182] Loaded profile config "functional-538700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0416 16:48:02.195893   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
I0416 16:48:04.268993   14120 main.go:141] libmachine: [stdout =====>] : Running
I0416 16:48:04.269090   14120 main.go:141] libmachine: [stderr =====>] : 
I0416 16:48:04.284971   14120 ssh_runner.go:195] Run: systemctl --version
I0416 16:48:04.284971   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
I0416 16:48:06.459406   14120 main.go:141] libmachine: [stdout =====>] : Running
I0416 16:48:06.459947   14120 main.go:141] libmachine: [stderr =====>] : 
I0416 16:48:06.460028   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
I0416 16:48:09.148018   14120 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
I0416 16:48:09.148018   14120 main.go:141] libmachine: [stderr =====>] : 
I0416 16:48:09.148018   14120 sshutil.go:53] new ssh client: &{IP:172.19.95.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-538700\id_rsa Username:docker}
I0416 16:48:09.314150   14120 ssh_runner.go:235] Completed: systemctl --version: (5.0288944s)
I0416 16:48:09.321133   14120 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.39s)

+ TestFunctional/parallel/ImageCommands/ImageListYaml (7.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image ls --format yaml --alsologtostderr: (7.0884359s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-538700 image ls --format yaml --alsologtostderr:
- id: 32ae46a960e1f786b027837d284d5ecfe6e0b5290069a79f0065924dd47def08
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-538700
size: "30"
- id: a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "82400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "127000000"
- id: 8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "59600000"
- id: 6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "122000000"
- id: e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-538700
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-538700 image ls --format yaml --alsologtostderr:
W0416 16:48:09.493185    9324 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0416 16:48:09.553891    9324 out.go:291] Setting OutFile to fd 880 ...
I0416 16:48:09.565885    9324 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:48:09.565885    9324 out.go:304] Setting ErrFile to fd 568...
I0416 16:48:09.565885    9324 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:48:09.581886    9324 config.go:182] Loaded profile config "functional-538700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0416 16:48:09.581886    9324 config.go:182] Loaded profile config "functional-538700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0416 16:48:09.581886    9324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
I0416 16:48:11.730568    9324 main.go:141] libmachine: [stdout =====>] : Running
I0416 16:48:11.730725    9324 main.go:141] libmachine: [stderr =====>] : 
I0416 16:48:11.740957    9324 ssh_runner.go:195] Run: systemctl --version
I0416 16:48:11.740957    9324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
I0416 16:48:13.821527    9324 main.go:141] libmachine: [stdout =====>] : Running
I0416 16:48:13.821686    9324 main.go:141] libmachine: [stderr =====>] : 
I0416 16:48:13.821751    9324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
I0416 16:48:16.273984    9324 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
I0416 16:48:16.273984    9324 main.go:141] libmachine: [stderr =====>] : 
I0416 16:48:16.274605    9324 sshutil.go:53] new ssh client: &{IP:172.19.95.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-538700\id_rsa Username:docker}
I0416 16:48:16.394544    9324 ssh_runner.go:235] Completed: systemctl --version: (4.6533225s)
I0416 16:48:16.401531    9324 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.09s)

+ TestFunctional/parallel/ImageCommands/ImageBuild (23.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-538700 ssh pgrep buildkitd: exit status 1 (8.7291577s)

** stderr ** 
	W0416 16:48:16.426441    6480 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image build -t localhost/my-image:functional-538700 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image build -t localhost/my-image:functional-538700 testdata\build --alsologtostderr: (8.2891618s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-538700 image build -t localhost/my-image:functional-538700 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in d35939984d55
---> Removed intermediate container d35939984d55
---> 9283d7524bf6
Step 3/3 : ADD content.txt /
---> 99b342e74ec5
Successfully built 99b342e74ec5
Successfully tagged localhost/my-image:functional-538700
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-538700 image build -t localhost/my-image:functional-538700 testdata\build --alsologtostderr:
W0416 16:48:25.162267   10220 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0416 16:48:25.223283   10220 out.go:291] Setting OutFile to fd 884 ...
I0416 16:48:25.243815   10220 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:48:25.243815   10220 out.go:304] Setting ErrFile to fd 648...
I0416 16:48:25.243815   10220 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:48:25.265653   10220 config.go:182] Loaded profile config "functional-538700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0416 16:48:25.284043   10220 config.go:182] Loaded profile config "functional-538700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0416 16:48:25.285318   10220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
I0416 16:48:27.255084   10220 main.go:141] libmachine: [stdout =====>] : Running
I0416 16:48:27.255155   10220 main.go:141] libmachine: [stderr =====>] : 
I0416 16:48:27.272277   10220 ssh_runner.go:195] Run: systemctl --version
I0416 16:48:27.272277   10220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-538700 ).state
I0416 16:48:29.264368   10220 main.go:141] libmachine: [stdout =====>] : Running
I0416 16:48:29.265190   10220 main.go:141] libmachine: [stderr =====>] : 
I0416 16:48:29.265254   10220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-538700 ).networkadapters[0]).ipaddresses[0]
I0416 16:48:31.609457   10220 main.go:141] libmachine: [stdout =====>] : 172.19.95.169
I0416 16:48:31.609613   10220 main.go:141] libmachine: [stderr =====>] : 
I0416 16:48:31.609666   10220 sshutil.go:53] new ssh client: &{IP:172.19.95.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-538700\id_rsa Username:docker}
I0416 16:48:31.718767   10220 ssh_runner.go:235] Completed: systemctl --version: (4.446116s)
I0416 16:48:31.718936   10220 build_images.go:161] Building image from path: C:\Users\jenkins.minikube5\AppData\Local\Temp\build.3078047804.tar
I0416 16:48:31.727452   10220 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0416 16:48:31.755106   10220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3078047804.tar
I0416 16:48:31.762673   10220 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3078047804.tar: stat -c "%s %y" /var/lib/minikube/build/build.3078047804.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3078047804.tar': No such file or directory
I0416 16:48:31.762835   10220 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\AppData\Local\Temp\build.3078047804.tar --> /var/lib/minikube/build/build.3078047804.tar (3072 bytes)
I0416 16:48:31.819404   10220 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3078047804
I0416 16:48:31.847034   10220 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3078047804 -xf /var/lib/minikube/build/build.3078047804.tar
I0416 16:48:31.864548   10220 docker.go:360] Building image: /var/lib/minikube/build/build.3078047804
I0416 16:48:31.872024   10220 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-538700 /var/lib/minikube/build/build.3078047804
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0416 16:48:33.244957   10220 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-538700 /var/lib/minikube/build/build.3078047804: (1.3728549s)
I0416 16:48:33.258715   10220 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3078047804
I0416 16:48:33.296529   10220 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3078047804.tar
I0416 16:48:33.319919   10220 build_images.go:217] Built localhost/my-image:functional-538700 from C:\Users\jenkins.minikube5\AppData\Local\Temp\build.3078047804.tar
I0416 16:48:33.320911   10220 build_images.go:133] succeeded building to: functional-538700
I0416 16:48:33.320911   10220 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image ls: (6.603859s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (23.62s)

+ TestFunctional/parallel/ImageCommands/Setup (4.12s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.9012141s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-538700
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.12s)

+ TestFunctional/parallel/ImageCommands/ImageLoadDaemon (22.54s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image load --daemon gcr.io/google-containers/addon-resizer:functional-538700 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image load --daemon gcr.io/google-containers/addon-resizer:functional-538700 --alsologtostderr: (15.0414369s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image ls: (7.492022s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (22.54s)

+ TestFunctional/parallel/DockerEnv/powershell (43.45s)
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-538700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-538700"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-538700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-538700": (27.8262785s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-538700 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-538700 docker-env | Invoke-Expression ; docker images": (15.6133027s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (43.45s)

+ TestFunctional/parallel/ImageCommands/ImageReloadDaemon (19.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image load --daemon gcr.io/google-containers/addon-resizer:functional-538700 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image load --daemon gcr.io/google-containers/addon-resizer:functional-538700 --alsologtostderr: (11.7471653s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image ls: (7.8854315s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (19.63s)

+ TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (25.85s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.609077s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-538700
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image load --daemon gcr.io/google-containers/addon-resizer:functional-538700 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image load --daemon gcr.io/google-containers/addon-resizer:functional-538700 --alsologtostderr: (14.7631108s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image ls: (7.2626931s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (25.85s)

+ TestFunctional/parallel/UpdateContextCmd/no_changes (2.26s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 update-context --alsologtostderr -v=2: (2.2602705s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.26s)

+ TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.29s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 update-context --alsologtostderr -v=2: (2.289107s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.29s)

+ TestFunctional/parallel/UpdateContextCmd/no_clusters (2.3s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 update-context --alsologtostderr -v=2: (2.3034281s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.30s)

+ TestFunctional/parallel/ServiceCmd/DeployApp (46.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-538700 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-538700 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-n99f5" [477fbb93-2fe5-43d0-b941-28c1255d0f39] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-n99f5" [477fbb93-2fe5-43d0-b941-28c1255d0f39] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 46.0099518s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (46.50s)

+ TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.94s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image save gcr.io/google-containers/addon-resizer:functional-538700 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image save gcr.io/google-containers/addon-resizer:functional-538700 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.9377634s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.94s)

+ TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (7.72s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-538700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-538700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-538700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1560: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 13760: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-538700 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (7.72s)

+ TestFunctional/parallel/ImageCommands/ImageRemove (14.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image rm gcr.io/google-containers/addon-resizer:functional-538700 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image rm gcr.io/google-containers/addon-resizer:functional-538700 --alsologtostderr: (7.5052327s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image ls: (7.1130726s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (14.62s)

+ TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-538700 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

+ TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.55s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-538700 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [cf5e3c0e-968f-4e1b-913f-88bbfe5b2e42] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [cf5e3c0e-968f-4e1b-913f-88bbfe5b2e42] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 20.0161362s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.55s)

+ TestFunctional/parallel/ImageCommands/ImageLoadFromFile (15.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (8.4156195s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image ls: (6.8712007s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (15.29s)

+ TestFunctional/parallel/ServiceCmd/List (11.98s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 service list: (11.9777047s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (11.98s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-538700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 9672: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-538700
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 image save --daemon gcr.io/google-containers/addon-resizer:functional-538700 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 image save --daemon gcr.io/google-containers/addon-resizer:functional-538700 --alsologtostderr: (9.7176224s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-538700
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.07s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (12.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-538700 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-538700 service list -o json: (12.3650042s)
functional_test.go:1490: Took "12.3650042s" to run "out/minikube-windows-amd64.exe -p functional-538700 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (12.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (9.86s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.453391s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (9.86s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (10.8s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (10.5930902s)
functional_test.go:1311: Took "10.5931724s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "206.1601ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (10.80s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (9.87s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (9.6555305s)
functional_test.go:1362: Took "9.6556067s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "214.2669ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (9.87s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.39s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-538700
--- PASS: TestFunctional/delete_addon-resizer_images (0.39s)

                                                
                                    
TestFunctional/delete_my-image_image (0.16s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-538700
--- PASS: TestFunctional/delete_my-image_image (0.16s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.17s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-538700
--- PASS: TestFunctional/delete_minikube_cached_images (0.17s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.16s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-022600 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.16s)

                                                
                                    
TestJSONOutput/start/Command (191.33s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-513200 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-513200 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m11.3261182s)
--- PASS: TestJSONOutput/start/Command (191.33s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (7.11s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-513200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-513200 --output=json --user=testUser: (7.1086628s)
--- PASS: TestJSONOutput/pause/Command (7.11s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (6.96s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-513200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-513200 --output=json --user=testUser: (6.9572022s)
--- PASS: TestJSONOutput/unpause/Command (6.96s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (37.52s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-513200 --output=json --user=testUser
E0416 17:36:06.929486    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-513200 --output=json --user=testUser: (37.5235869s)
--- PASS: TestJSONOutput/stop/Command (37.52s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.25s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-762300 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-762300 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (201.5137ms)
-- stdout --
	{"specversion":"1.0","id":"ddcfbf30-e3f6-4fcd-befd-c1e5edb2390a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-762300] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c500a86-309c-44ca-a7bf-0e5efb477a54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube5\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"575e341f-6385-4cb9-8f0a-dd8ada0e56b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"10c2b47c-cce2-4023-809c-7c615dc8b1a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"21634f2b-e3a4-4ba5-9081-7c7da11dbd21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18649"}}
	{"specversion":"1.0","id":"9bc5384f-194f-479d-8f8e-d421344eba1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ef594065-23e4-461b-88dd-b97a2e23cbe4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
** stderr ** 
	W0416 17:36:34.110680    9252 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-762300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-762300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-762300: (1.0508746s)
--- PASS: TestErrorJSONOutput (1.25s)

                                                
                                    
TestMainNoArgs (0.2s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.20s)

                                                
                                    
TestMinikubeProfile (479.74s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-749400 --driver=hyperv
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-749400 --driver=hyperv: (2m57.1348556s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-749400 --driver=hyperv
E0416 17:41:06.948357    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-749400 --driver=hyperv: (3m2.0715658s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-749400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (17.2455638s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-749400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (17.2398561s)
helpers_test.go:175: Cleaning up "second-749400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-749400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-749400: (45.2311779s)
helpers_test.go:175: Cleaning up "first-749400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-749400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-749400: (40.0696399s)
--- PASS: TestMinikubeProfile (479.74s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (141.73s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-738600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0416 17:45:50.188876    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 17:46:06.959014    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-738600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m20.7235452s)
--- PASS: TestMountStart/serial/StartWithMountFirst (141.73s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (8.72s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-738600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-738600 ssh -- ls /minikube-host: (8.7208884s)
--- PASS: TestMountStart/serial/VerifyMountFirst (8.72s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (141.84s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-738600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-738600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m20.8299576s)
--- PASS: TestMountStart/serial/StartWithMountSecond (141.84s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (8.69s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-738600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-738600 ssh -- ls /minikube-host: (8.6944522s)
--- PASS: TestMountStart/serial/VerifyMountSecond (8.69s)

                                                
                                    
TestMountStart/serial/DeleteFirst (25.32s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-738600 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-738600 --alsologtostderr -v=5: (25.3167602s)
--- PASS: TestMountStart/serial/DeleteFirst (25.32s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (8.92s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-738600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-738600 ssh -- ls /minikube-host: (8.9248045s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (8.92s)

                                                
                                    
TestMountStart/serial/Stop (27.82s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-738600
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-738600: (27.8247845s)
--- PASS: TestMountStart/serial/Stop (27.82s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (386.05s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-945500 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0416 17:56:07.002661    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-945500 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m4.4367411s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-945500 status --alsologtostderr: (21.6105512s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (386.05s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (7.87s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- rollout status deployment/busybox
E0416 18:01:07.020285    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- rollout status deployment/busybox: (2.4485252s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- exec busybox-7fdf7869d9-jxvx2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- exec busybox-7fdf7869d9-jxvx2 -- nslookup kubernetes.io: (1.9097235s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- exec busybox-7fdf7869d9-ns8nx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- exec busybox-7fdf7869d9-jxvx2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- exec busybox-7fdf7869d9-ns8nx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- exec busybox-7fdf7869d9-jxvx2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-945500 -- exec busybox-7fdf7869d9-ns8nx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.87s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.15s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-945500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.15s)

                                                
                                    
TestMultiNode/serial/ProfileList (8.66s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (8.6645604s)
--- PASS: TestMultiNode/serial/ProfileList (8.66s)

                                                
                                    
TestMultiNode/serial/StopNode (73.31s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-945500 node stop m03: (26.6913787s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-945500 status: exit status 7 (23.2105495s)

-- stdout --
	multinode-945500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-945500-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-945500-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0416 18:07:34.580701    6000 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-945500 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-945500 status --alsologtostderr: exit status 7 (23.4106475s)

-- stdout --
	multinode-945500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-945500-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-945500-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0416 18:07:57.794509    3928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 18:07:57.849266    3928 out.go:291] Setting OutFile to fd 816 ...
	I0416 18:07:57.849266    3928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:07:57.850264    3928 out.go:304] Setting ErrFile to fd 880...
	I0416 18:07:57.850264    3928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 18:07:57.869250    3928 out.go:298] Setting JSON to false
	I0416 18:07:57.869250    3928 mustload.go:65] Loading cluster: multinode-945500
	I0416 18:07:57.869250    3928 notify.go:220] Checking for updates...
	I0416 18:07:57.869250    3928 config.go:182] Loaded profile config "multinode-945500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 18:07:57.869250    3928 status.go:255] checking status of multinode-945500 ...
	I0416 18:07:57.870245    3928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:07:59.805822    3928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:07:59.805908    3928 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:07:59.805908    3928 status.go:330] multinode-945500 host status = "Running" (err=<nil>)
	I0416 18:07:59.805987    3928 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:07:59.806157    3928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:08:01.723680    3928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:08:01.724581    3928 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:01.724645    3928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:08:04.006547    3928 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 18:08:04.006547    3928 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:04.007184    3928 host.go:66] Checking if "multinode-945500" exists ...
	I0416 18:08:04.016490    3928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 18:08:04.016490    3928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500 ).state
	I0416 18:08:05.978540    3928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:08:05.978540    3928 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:05.978540    3928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500 ).networkadapters[0]).ipaddresses[0]
	I0416 18:08:08.309968    3928 main.go:141] libmachine: [stdout =====>] : 172.19.91.227
	
	I0416 18:08:08.309968    3928 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:08.311079    3928 sshutil.go:53] new ssh client: &{IP:172.19.91.227 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500\id_rsa Username:docker}
	I0416 18:08:08.420313    3928 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.4035727s)
	I0416 18:08:08.429867    3928 ssh_runner.go:195] Run: systemctl --version
	I0416 18:08:08.448979    3928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:08:08.472028    3928 kubeconfig.go:125] found "multinode-945500" server: "https://172.19.91.227:8443"
	I0416 18:08:08.472111    3928 api_server.go:166] Checking apiserver status ...
	I0416 18:08:08.479845    3928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 18:08:08.508686    3928 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2058/cgroup
	W0416 18:08:08.526595    3928 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2058/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 18:08:08.538594    3928 ssh_runner.go:195] Run: ls
	I0416 18:08:08.544387    3928 api_server.go:253] Checking apiserver healthz at https://172.19.91.227:8443/healthz ...
	I0416 18:08:08.552263    3928 api_server.go:279] https://172.19.91.227:8443/healthz returned 200:
	ok
	I0416 18:08:08.552263    3928 status.go:422] multinode-945500 apiserver status = Running (err=<nil>)
	I0416 18:08:08.552263    3928 status.go:257] multinode-945500 status: &{Name:multinode-945500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 18:08:08.552263    3928 status.go:255] checking status of multinode-945500-m02 ...
	I0416 18:08:08.553206    3928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:08:10.523130    3928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:08:10.523968    3928 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:10.523968    3928 status.go:330] multinode-945500-m02 host status = "Running" (err=<nil>)
	I0416 18:08:10.524169    3928 host.go:66] Checking if "multinode-945500-m02" exists ...
	I0416 18:08:10.525141    3928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:08:12.475663    3928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:08:12.475663    3928 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:12.475663    3928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:08:14.773270    3928 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:08:14.773270    3928 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:14.773270    3928 host.go:66] Checking if "multinode-945500-m02" exists ...
	I0416 18:08:14.783217    3928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 18:08:14.783217    3928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m02 ).state
	I0416 18:08:16.739945    3928 main.go:141] libmachine: [stdout =====>] : Running
	
	I0416 18:08:16.740871    3928 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:16.740871    3928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-945500-m02 ).networkadapters[0]).ipaddresses[0]
	I0416 18:08:19.014998    3928 main.go:141] libmachine: [stdout =====>] : 172.19.91.6
	
	I0416 18:08:19.015718    3928 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:19.016023    3928 sshutil.go:53] new ssh client: &{IP:172.19.91.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-945500-m02\id_rsa Username:docker}
	I0416 18:08:19.111717    3928 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3282545s)
	I0416 18:08:19.120206    3928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:08:19.150690    3928 status.go:257] multinode-945500-m02 status: &{Name:multinode-945500-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0416 18:08:19.150806    3928 status.go:255] checking status of multinode-945500-m03 ...
	I0416 18:08:19.151713    3928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-945500-m03 ).state
	I0416 18:08:21.071786    3928 main.go:141] libmachine: [stdout =====>] : Off
	
	I0416 18:08:21.071786    3928 main.go:141] libmachine: [stderr =====>] : 
	I0416 18:08:21.071786    3928 status.go:330] multinode-945500-m03 host status = "Stopped" (err=<nil>)
	I0416 18:08:21.071869    3928 status.go:343] host is not running, skipping remaining checks
	I0416 18:08:21.071869    3928 status.go:257] multinode-945500-m03 status: &{Name:multinode-945500-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (73.31s)

TestPreload (493.71s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-301700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-301700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m4.8329943s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-301700 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-301700 image pull gcr.io/k8s-minikube/busybox: (7.4687312s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-301700
E0416 18:31:07.115913    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-301700: (38.5136746s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-301700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-301700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m36.172424s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-301700 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-301700 image list: (6.6328956s)
helpers_test.go:175: Cleaning up "test-preload-301700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-301700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-301700: (40.0878031s)
--- PASS: TestPreload (493.71s)

TestScheduledStopWindows (307.46s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-849000 --memory=2048 --driver=hyperv
E0416 18:35:50.389746    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
E0416 18:36:07.129332    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-849000 --memory=2048 --driver=hyperv: (2m59.1114728s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-849000 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-849000 --schedule 5m: (9.7651233s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-849000 -n scheduled-stop-849000
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-849000 -n scheduled-stop-849000: exit status 1 (10.0218223s)

** stderr ** 
	W0416 18:38:11.183453   13696 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-849000 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-849000 -- sudo systemctl show minikube-scheduled-stop --no-page: (8.6410007s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-849000 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-849000 --schedule 5s: (9.6444382s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-849000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-849000: exit status 7 (2.1371855s)

-- stdout --
	scheduled-stop-849000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W0416 18:39:39.513552    6544 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-849000 -n scheduled-stop-849000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-849000 -n scheduled-stop-849000: exit status 7 (2.157286s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0416 18:39:41.653707    1824 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-849000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-849000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-849000: (25.9598091s)
--- PASS: TestScheduledStopWindows (307.46s)

TestRunningBinaryUpgrade (882.62s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.2128561871.exe start -p running-upgrade-360500 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.2128561871.exe start -p running-upgrade-360500 --memory=2200 --vm-driver=hyperv: (6m27.8787751s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-360500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0416 18:52:30.448774    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-360500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (7m5.5504052s)
helpers_test.go:175: Cleaning up "running-upgrade-360500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-360500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-360500: (1m8.2827433s)
--- PASS: TestRunningBinaryUpgrade (882.62s)

TestKubernetesUpgrade (1156.34s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-833900 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-833900 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (5m19.8576875s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-833900
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-833900: (37.5431834s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-833900 status --format={{.Host}}
E0416 18:46:07.161803    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-833900 status --format={{.Host}}: exit status 7 (2.2233103s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0416 18:46:07.200310    1728 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-833900 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-833900 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (6m51.9123522s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-833900 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-833900 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-833900 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (231.162ms)

-- stdout --
	* [kubernetes-upgrade-833900] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0416 18:53:01.494510   10008 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-833900
	    minikube start -p kubernetes-upgrade-833900 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8339002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-833900 --kubernetes-version=v1.30.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-833900 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-833900 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (5m33.6247076s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-833900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-833900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-833900: (50.8073095s)
--- PASS: TestKubernetesUpgrade (1156.34s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.28s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-833900 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-833900 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (276.6722ms)

-- stdout --
	* [NoKubernetes-833900] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0416 18:40:09.799520    3920 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.28s)

TestStoppedBinaryUpgrade/Setup (0.66s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.66s)

TestStoppedBinaryUpgrade/Upgrade (720.52s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1537143581.exe start -p stopped-upgrade-280600 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1537143581.exe start -p stopped-upgrade-280600 --memory=2200 --vm-driver=hyperv: (4m46.9676472s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1537143581.exe -p stopped-upgrade-280600 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1537143581.exe -p stopped-upgrade-280600 stop: (33.9468681s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-280600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0416 18:51:07.188665    5324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-538700\client.crt: The system cannot find the path specified.
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-280600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m39.6023289s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (720.52s)

TestPause/serial/Start (421.62s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-334400 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-334400 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (7m1.6202178s)
--- PASS: TestPause/serial/Start (421.62s)

TestStoppedBinaryUpgrade/MinikubeLogs (8.99s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-280600
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-280600: (8.991015s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (8.99s)


Test skip (30/195)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.30.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.30.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
+
TestKVMDriverInstallOrUpdate (0s)
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)
+
TestHyperKitDriverInstallOrUpdate (0s)
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)
+
TestHyperkitDriverSkipUpgrade (0s)
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)
+
TestFunctional/parallel/DashboardCmd (300.02s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-538700 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-538700 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 7484: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

TestFunctional/parallel/DryRun (5.03s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-538700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-538700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.027601s)

-- stdout --
	* [functional-538700] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0416 16:47:34.415846    5444 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 16:47:34.490852    5444 out.go:291] Setting OutFile to fd 568 ...
	I0416 16:47:34.491842    5444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:47:34.491842    5444 out.go:304] Setting ErrFile to fd 896...
	I0416 16:47:34.491842    5444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:47:34.518556    5444 out.go:298] Setting JSON to false
	I0416 16:47:34.522560    5444 start.go:129] hostinfo: {"hostname":"minikube5","uptime":23684,"bootTime":1713262370,"procs":208,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 16:47:34.522560    5444 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 16:47:34.524548    5444 out.go:177] * [functional-538700] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 16:47:34.525548    5444 notify.go:220] Checking for updates...
	I0416 16:47:34.525548    5444 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:47:34.526559    5444 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:47:34.527547    5444 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 16:47:34.527547    5444 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:47:34.528559    5444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:47:34.529549    5444 config.go:182] Loaded profile config "functional-538700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:47:34.530546    5444 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.03s)

TestFunctional/parallel/InternationalLanguage (5.04s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-538700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-538700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0375672s)

-- stdout --
	* [functional-538700] minikube v1.33.0-beta.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0416 16:47:39.428582    8924 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0416 16:47:39.491148    8924 out.go:291] Setting OutFile to fd 1012 ...
	I0416 16:47:39.491984    8924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:47:39.491984    8924 out.go:304] Setting ErrFile to fd 796...
	I0416 16:47:39.491984    8924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:47:39.512041    8924 out.go:298] Setting JSON to false
	I0416 16:47:39.515277    8924 start.go:129] hostinfo: {"hostname":"minikube5","uptime":23689,"bootTime":1713262370,"procs":209,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0416 16:47:39.515277    8924 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0416 16:47:39.516748    8924 out.go:177] * [functional-538700] minikube v1.33.0-beta.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0416 16:47:39.516954    8924 notify.go:220] Checking for updates...
	I0416 16:47:39.517796    8924 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0416 16:47:39.518559    8924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:47:39.519203    8924 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0416 16:47:39.519340    8924 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:47:39.520443    8924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:47:39.522127    8924 config.go:182] Loaded profile config "functional-538700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 16:47:39.523219    8924 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.04s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
